Compare commits

...

37 Commits

Author SHA1 Message Date
Dennis Schwerdel 98a09fea2e Bugfix 2018-03-08 20:05:27 +01:00
Dennis Schwerdel 2f3c97a043 stats & dups 2018-03-08 15:21:11 +01:00
Dennis Schwerdel 56c916f585 more stats 2018-03-07 00:36:44 +01:00
Dennis Schwerdel d47edee08f Stats 2018-03-06 22:53:35 +01:00
Dennis Schwerdel 224bf1d25c Index stats 2018-03-06 22:22:52 +01:00
Dennis Schwerdel 3b7bb52620 libsodium23 deb 2018-03-04 23:09:41 +01:00
Dennis Schwerdel 08b1f118f1 New comparison 2018-03-04 17:51:55 +01:00
Dennis Schwerdel d93e4c9cb6 new comparison commands 2018-03-04 16:29:58 +01:00
Dennis Schwerdel 9bd518655d new comparison 2018-03-03 23:58:16 +01:00
Dennis Schwerdel bdb13b96cf Updated copyright 2018-03-03 17:25:38 +01:00
Dennis Schwerdel 1102600893 Some changes 2018-03-03 17:25:05 +01:00
Dennis Schwerdel 2f6c3b239e Update 2018-02-25 01:03:18 +01:00
Dennis Schwerdel 9ca22008c7 New strings 2018-02-24 23:35:12 +01:00
Dennis Schwerdel 6f9611bba6 All in one module 2018-02-24 23:28:18 +01:00
Dennis Schwerdel a81aaae637 strip 2018-02-24 15:17:02 +01:00
Dennis Schwerdel 98c352814f Translation 2018-02-24 14:55:56 +01:00
Dennis Schwerdel ce3223b5ea More files 2018-02-24 13:55:24 +01:00
Dennis Schwerdel 8911c8af6d Translation infrastructure 2018-02-24 13:19:51 +01:00
Dennis Schwerdel 24e28e6bcc First translation code 2018-02-21 22:49:21 +01:00
Dennis Schwerdel aa6a450e43 Updated dependencies 2 2018-02-19 23:31:58 +01:00
Dennis Schwerdel c87458d981 Updated dependencies 1 2018-02-19 22:42:44 +01:00
Dennis Schwerdel 618a858506 Make clippy happy 2018-02-19 22:30:59 +01:00
Dennis Schwerdel fb73e29a20 Some minor changes 2018-02-19 21:18:47 +01:00
Dennis Schwerdel b2331c61fd Repository readme 2017-08-04 20:35:04 +02:00
Dennis Schwerdel 5fe41127fc More tests, forcing cut-point-skipping, using black_box 2017-08-03 07:34:16 +02:00
Dennis Schwerdel b4e6b34bbe Configurable cut-point-skipping in fastcdc 2017-08-02 23:36:01 +02:00
Dennis Schwerdel cbfcc255af Simplified code 2017-08-02 23:12:46 +02:00
Dennis Schwerdel 5ad90f2929 Test chunker output 2017-08-02 22:19:01 +02:00
Dennis Schwerdel 837df8bbd3 Also including the first min_size bytes in hash (oops), performance improvements 2017-08-02 22:18:37 +02:00
Dennis Schwerdel 54e2329228 Badges 2017-07-31 20:23:46 +02:00
Dennis Schwerdel 5992a33c3f Coverage only on nightly 2017-07-30 22:09:40 +02:00
Dennis Schwerdel ccccd7da0c Added libfuse 2017-07-30 21:37:59 +02:00
Dennis Schwerdel c6480de13c Next try on travis ci 2017-07-30 21:25:09 +02:00
Dennis Schwerdel b62ab95503 Compiling libsodium18 2017-07-30 21:12:28 +02:00
Dennis Schwerdel c954b8489c Added libsodium to travis 2017-07-30 21:09:05 +02:00
Dennis Schwerdel e4a9b3a411 Updated dependencies 2017-07-30 21:05:04 +02:00
Dennis Schwerdel fd4798b35c Added travis config 2017-07-30 20:59:23 +02:00
89 changed files with 8751 additions and 1592 deletions

.gitignore vendored (3 changes)

@@ -7,3 +7,6 @@ excludes
._*
.~*
docs/logo
lang/*.mo
lang/default.pot
.idea

.travis.yml (new file, 36 changes)

@@ -0,0 +1,36 @@
language: rust
dist: trusty
addons:
apt:
packages:
- libssl-dev
- libfuse-dev
install:
- wget https://github.com/jedisct1/libsodium/releases/download/1.0.8/libsodium-1.0.8.tar.gz
- tar xvfz libsodium-1.0.8.tar.gz
- cd libsodium-1.0.8 && ./configure --prefix=$HOME/installed_libs && make && make install && cd ..
- git clone https://github.com/quixdb/squash libsquash && cd libsquash && git checkout 5ea579cae2324f9e814cb3d88aa589dff312e9e2 && ./autogen.sh --prefix=$HOME/installed_libs --disable-external && make && make install && cd ..
- export PKG_CONFIG_PATH=$HOME/installed_libs/lib/pkgconfig:$PKG_CONFIG_PATH
- export LD_LIBRARY_PATH=$HOME/installed_libs/lib:$LD_LIBRARY_PATH
cache:
- cargo
- ccache
rust:
- stable
- beta
- nightly
matrix:
allow_failures:
- rust:
- beta
- stable
script:
- cargo clean
- cargo build
- cargo test
after_success: |
if [[ "$TRAVIS_RUST_VERSION" == nightly ]]; then
cargo install cargo-tarpaulin
cargo tarpaulin --ciserver travis-ci --coveralls $TRAVIS_JOB_ID
fi

CHANGELOG.md

@@ -3,6 +3,20 @@
This project follows [semantic versioning](http://semver.org).
### UNRELEASED
* [added] Translation infrastructure (**requires nightly rust**)
* [added] Checking hashes of chunks in check --bundle-data
* [added] Debian package for libsodium23
* [modified] Updated dependencies
* [modified] Updated copyright date
* [modified] Moved all code into one crate for easier translation
* [modified] Compression ratio is now displayed in a clearer format
* [fixed] Also including the first min_size bytes in hash
* [fixed] Fixed some texts in manpages
* [fixed] Calling strip on final binaries
* [fixed] Fixed bug that caused repairs to miss some errors
### v0.4.0 (2017-07-21)
* [added] Added `copy` subcommand
* [added] Added support for xattrs in fuse mount

Cargo.lock (generated, 530 changes)

@@ -1,61 +1,35 @@
[root]
name = "zvault"
version = "0.4.0"
dependencies = [
"ansi_term 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
"blake2-rfc 0.2.17 (registry+https://github.com/rust-lang/crates.io-index)",
"byteorder 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"chrono 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"chunking 0.1.0",
"clap 2.25.0 (registry+https://github.com/rust-lang/crates.io-index)",
"crossbeam 0.2.10 (registry+https://github.com/rust-lang/crates.io-index)",
"filetime 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)",
"fuse 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"index 0.1.0",
"lazy_static 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"libsodium-sys 0.0.15 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
"murmurhash3 0.0.5 (registry+https://github.com/rust-lang/crates.io-index)",
"pbr 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"pkg-config 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
"quick-error 1.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.3.15 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"rmp-serde 0.13.4 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.10 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_bytes 0.10.1 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_utils 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_yaml 0.7.1 (registry+https://github.com/rust-lang/crates.io-index)",
"sodiumoxide 0.0.15 (registry+https://github.com/rust-lang/crates.io-index)",
"squash-sys 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
"tar 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.38 (registry+https://github.com/rust-lang/crates.io-index)",
"users 0.5.2 (registry+https://github.com/rust-lang/crates.io-index)",
"xattr 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "aho-corasick"
version = "0.6.3"
version = "0.6.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"memchr 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "ansi_term"
version = "0.9.0"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "arrayvec"
version = "0.4.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"nodrop 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "atty"
version = "0.2.2"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
"termion 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -65,20 +39,26 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "bitflags"
version = "0.9.1"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "blake2-rfc"
version = "0.2.17"
version = "0.2.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"constant_time_eq 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"arrayvec 0.4.7 (registry+https://github.com/rust-lang/crates.io-index)",
"constant_time_eq 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "byteorder"
version = "1.1.0"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "cfg-if"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@@ -86,69 +66,68 @@ name = "chrono"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"num 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.38 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "chunking"
version = "0.1.0"
dependencies = [
"quick-error 1.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
"num 0.1.42 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.39 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "clap"
version = "2.25.0"
version = "2.31.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"ansi_term 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
"atty 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"bitflags 0.9.1 (registry+https://github.com/rust-lang/crates.io-index)",
"strsim 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)",
"term_size 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"textwrap 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-segmentation 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"ansi_term 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)",
"atty 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
"bitflags 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"strsim 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
"textwrap 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-width 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"vec_map 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "constant_time_eq"
version = "0.1.2"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "crossbeam"
version = "0.2.10"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "filetime"
version = "0.1.10"
version = "0.1.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"cfg-if 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "fuchsia-zircon"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"bitflags 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"fuchsia-zircon-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "fuchsia-zircon-sys"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "fuse"
version = "0.3.0"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
"pkg-config 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
"thread-scoped 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.38 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "index"
version = "0.1.0"
dependencies = [
"mmap 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"quick-error 1.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.39 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -162,7 +141,12 @@ dependencies = [
[[package]]
name = "lazy_static"
version = "0.2.8"
version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "lazy_static"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@@ -172,39 +156,56 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "libc"
version = "0.2.26"
version = "0.2.39"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "libsodium-sys"
version = "0.0.15"
version = "0.0.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
"pkg-config 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "linked-hash-map"
version = "0.3.0"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "linked-hash-map"
version = "0.4.2"
name = "locale_config"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "log"
version = "0.3.8"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"log 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "log"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "memchr"
version = "1.0.1"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -213,7 +214,7 @@ version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)",
"tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"tempdir 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -221,36 +222,49 @@ name = "murmurhash3"
version = "0.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "nodrop"
version = "0.1.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "num"
version = "0.1.40"
version = "0.1.42"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"num-integer 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)",
"num-iter 0.1.34 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"num-integer 0.1.36 (registry+https://github.com/rust-lang/crates.io-index)",
"num-iter 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "num-integer"
version = "0.1.35"
version = "0.1.36"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "num-iter"
version = "0.1.34"
version = "0.1.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"num-integer 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"num-integer 0.1.36 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "num-traits"
version = "0.1.40"
version = "0.1.43"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"num-traits 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "num-traits"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
@@ -259,8 +273,8 @@ version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.38 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.39 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -271,69 +285,93 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "quick-error"
version = "1.2.0"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "rand"
version = "0.3.15"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "redox_syscall"
version = "0.1.26"
version = "0.1.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "regex"
version = "0.2.2"
name = "redox_termios"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"aho-corasick 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "regex"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"aho-corasick 0.6.4 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "regex-syntax"
version = "0.4.1"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "rmp"
version = "0.8.6"
name = "remove_dir_all"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"byteorder 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rmp"
version = "0.8.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"byteorder 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.43 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rmp-serde"
version = "0.13.4"
version = "0.13.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"byteorder 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"rmp 0.8.6 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.10 (registry+https://github.com/rust-lang/crates.io-index)",
"byteorder 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rmp 0.8.7 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "runtime-fmt"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "serde"
version = "1.0.10"
version = "1.0.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "serde_bytes"
version = "0.10.1"
version = "0.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"serde 1.0.10 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -341,29 +379,29 @@ name = "serde_utils"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"serde 1.0.10 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_bytes 0.10.1 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_bytes 0.10.3 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "serde_yaml"
version = "0.7.1"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"linked-hash-map 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.10 (registry+https://github.com/rust-lang/crates.io-index)",
"yaml-rust 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
"linked-hash-map 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.1.43 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)",
"yaml-rust 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "sodiumoxide"
version = "0.0.15"
version = "0.0.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"libsodium-sys 0.0.15 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.10 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
"libsodium-sys 0.0.16 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -372,49 +410,50 @@ version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"bitflags 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
"pkg-config 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "strsim"
version = "0.6.0"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "tar"
version = "0.4.13"
version = "0.4.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"filetime 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"filetime 0.1.15 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)",
"xattr 0.1.11 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "tempdir"
version = "0.3.5"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rand 0.3.15 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"remove_dir_all 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "term_size"
version = "0.3.0"
name = "termion"
version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_termios 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "textwrap"
version = "0.6.0"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"term_size 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"unicode-width 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
@@ -425,29 +464,23 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "thread_local"
version = "0.3.4"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"unreachable 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "time"
version = "0.1.38"
version = "0.1.39"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.26 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "unicode-segmentation"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "unicode-width"
version = "0.1.4"
@@ -463,10 +496,10 @@ dependencies = [
[[package]]
name = "users"
version = "0.5.2"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
@@ -489,96 +522,165 @@ name = "winapi"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi"
version = "0.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "winapi-build"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "xattr"
version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "xattr"
version = "0.2.0"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "yaml-rust"
version = "0.3.5"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"linked-hash-map 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"linked-hash-map 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "zvault"
version = "0.5.0"
dependencies = [
"ansi_term 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)",
"blake2-rfc 0.2.18 (registry+https://github.com/rust-lang/crates.io-index)",
"byteorder 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"chrono 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"clap 2.31.1 (registry+https://github.com/rust-lang/crates.io-index)",
"crossbeam 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)",
"filetime 0.1.15 (registry+https://github.com/rust-lang/crates.io-index)",
"fuse 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)",
"libsodium-sys 0.0.16 (registry+https://github.com/rust-lang/crates.io-index)",
"locale_config 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
"mmap 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"murmurhash3 0.0.5 (registry+https://github.com/rust-lang/crates.io-index)",
"pbr 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
"quick-error 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
"rmp-serde 0.13.7 (registry+https://github.com/rust-lang/crates.io-index)",
"runtime-fmt 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"serde 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_bytes 0.10.3 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_utils 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)",
"serde_yaml 0.7.3 (registry+https://github.com/rust-lang/crates.io-index)",
"sodiumoxide 0.0.16 (registry+https://github.com/rust-lang/crates.io-index)",
"squash-sys 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
"tar 0.4.14 (registry+https://github.com/rust-lang/crates.io-index)",
"time 0.1.39 (registry+https://github.com/rust-lang/crates.io-index)",
"users 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)",
"xattr 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[metadata]
"checksum aho-corasick 0.6.3 (registry+https://github.com/rust-lang/crates.io-index)" = "500909c4f87a9e52355b26626d890833e9e1d53ac566db76c36faa984b889699"
"checksum ansi_term 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)" = "23ac7c30002a5accbf7e8987d0632fa6de155b7c3d39d0067317a391e00a2ef6"
"checksum atty 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "d912da0db7fa85514874458ca3651fe2cddace8d0b0505571dbdcd41ab490159"
"checksum aho-corasick 0.6.4 (registry+https://github.com/rust-lang/crates.io-index)" = "d6531d44de723825aa81398a6415283229725a00fa30713812ab9323faa82fc4"
"checksum ansi_term 0.11.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ee49baf6cb617b853aa8d93bf420db2383fab46d314482ca2803b40d5fde979b"
"checksum arrayvec 0.4.7 (registry+https://github.com/rust-lang/crates.io-index)" = "a1e964f9e24d588183fcb43503abda40d288c8657dfc27311516ce2f05675aef"
"checksum atty 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)" = "af80143d6f7608d746df1520709e5d141c96f240b0e62b0aa41bdfb53374d9d4"
"checksum bitflags 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)" = "aad18937a628ec6abcd26d1489012cc0e18c21798210f491af69ded9b881106d"
"checksum bitflags 0.9.1 (registry+https://github.com/rust-lang/crates.io-index)" = "4efd02e230a02e18f92fc2735f44597385ed02ad8f831e7c1c1156ee5e1ab3a5"
"checksum blake2-rfc 0.2.17 (registry+https://github.com/rust-lang/crates.io-index)" = "0c6a476f32fef3402f1161f89d0d39822809627754a126f8441ff2a9d45e2d59"
"checksum byteorder 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ff81738b726f5d099632ceaffe7fb65b90212e8dce59d518729e7e8634032d3d"
"checksum bitflags 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "b3c30d3802dfb7281680d6285f2ccdaa8c2d8fee41f93805dba5c4cf50dc23cf"
"checksum blake2-rfc 0.2.18 (registry+https://github.com/rust-lang/crates.io-index)" = "5d6d530bdd2d52966a6d03b7a964add7ae1a288d25214066fd4b600f0f796400"
"checksum byteorder 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "652805b7e73fada9d85e9a6682a4abd490cb52d96aeecc12e33a0de34dfd0d23"
"checksum cfg-if 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "d4c819a1287eb618df47cc647173c5c4c66ba19d888a6e50d605672aed3140de"
"checksum chrono 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "7c20ebe0b2b08b0aeddba49c609fe7957ba2e33449882cb186a180bc60682fa9"
"checksum clap 2.25.0 (registry+https://github.com/rust-lang/crates.io-index)" = "867a885995b4184be051b70a592d4d70e32d7a188db6e8dff626af286a962771"
"checksum constant_time_eq 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "07dcb7959f0f6f1cf662f9a7ff389bcb919924d99ac41cf31f10d611d8721323"
"checksum crossbeam 0.2.10 (registry+https://github.com/rust-lang/crates.io-index)" = "0c5ea215664ca264da8a9d9c3be80d2eaf30923c259d03e870388eb927508f97"
"checksum filetime 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)" = "5363ab8e4139b8568a6237db5248646e5a8a2f89bd5ccb02092182b11fd3e922"
"checksum fuse 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "5087262ce5b36fed6ccd4abf0a8224e48d055a2bb07fecb5605765de6f114a28"
"checksum clap 2.31.1 (registry+https://github.com/rust-lang/crates.io-index)" = "5dc18f6f4005132120d9711636b32c46a233fad94df6217fa1d81c5e97a9f200"
"checksum constant_time_eq 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)" = "8ff012e225ce166d4422e0e78419d901719760f62ae2b7969ca6b564d1b54a9e"
"checksum crossbeam 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)" = "24ce9782d4d5c53674646a6a4c1863a21a8fc0cb649b3c94dfc16e45071dea19"
"checksum filetime 0.1.15 (registry+https://github.com/rust-lang/crates.io-index)" = "714653f3e34871534de23771ac7b26e999651a0a228f47beb324dfdf1dd4b10f"
"checksum fuchsia-zircon 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "2e9763c69ebaae630ba35f74888db465e49e259ba1bc0eda7d06f4a067615d82"
"checksum fuchsia-zircon-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)" = "3dcaa9ae7725d12cdb85b3ad99a434db70b468c09ded17e012d86b5c1010f7a7"
"checksum fuse 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)" = "80e57070510966bfef93662a81cb8aa2b1c7db0964354fa9921434f04b9e8660"
"checksum kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "7507624b29483431c0ba2d82aece8ca6cdba9382bff4ddd0f7490560c056098d"
"checksum lazy_static 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)" = "3b37545ab726dd833ec6420aaba8231c5b320814b9029ad585555d2a03e94fbf"
"checksum lazy_static 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)" = "76f033c7ad61445c5b347c7382dd1237847eb1bce590fe50365dcb33d546be73"
"checksum lazy_static 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "c8f31047daa365f19be14b47c29df4f7c3b581832407daabe6ae77397619237d"
"checksum libc 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)" = "e32a70cf75e5846d53a673923498228bbec6a8624708a9ea5645f075d6276122"
"checksum libc 0.2.26 (registry+https://github.com/rust-lang/crates.io-index)" = "30885bcb161cf67054244d10d4a7f4835ffd58773bc72e07d35fecf472295503"
"checksum libsodium-sys 0.0.15 (registry+https://github.com/rust-lang/crates.io-index)" = "45e6d6bd0f1b72068272e1689693e3218f192221fae7a2046081f60035540df8"
"checksum linked-hash-map 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "6d262045c5b87c0861b3f004610afd0e2c851e2908d08b6c870cbb9d5f494ecd"
"checksum linked-hash-map 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "7860ec297f7008ff7a1e3382d7f7e1dcd69efc94751a2284bafc3d013c2aa939"
"checksum log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)" = "880f77541efa6e5cc74e76910c9884d9859683118839d6a1dc3b11e63512565b"
"checksum memchr 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "1dbccc0e46f1ea47b9f17e6d67c5a96bd27030519c519c9c91327e31275a47b4"
"checksum libc 0.2.39 (registry+https://github.com/rust-lang/crates.io-index)" = "f54263ad99207254cf58b5f701ecb432c717445ea2ee8af387334bdd1a03fdff"
"checksum libsodium-sys 0.0.16 (registry+https://github.com/rust-lang/crates.io-index)" = "fcbd1beeed8d44caa8a669ebaa697c313976e242c03cc9fb23d88bf1656f5542"
"checksum linked-hash-map 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "70fb39025bc7cdd76305867c4eccf2f2dcf6e9a57f5b21a93e1c2d86cd03ec9e"
"checksum locale_config 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "14fbee0e39bc2dd6a2427c4fdea66e9826cc1fd09b0a0b7550359f5f6efe1dab"
"checksum log 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)" = "e19e8d5c34a3e0e2223db8e060f9e8264aeeb5c5fc64a4ee9965c062211c024b"
"checksum log 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)" = "89f010e843f2b1a31dbd316b3b8d443758bc634bed37aabade59c686d644e0a2"
"checksum memchr 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "796fba70e76612589ed2ce7f45282f5af869e0fdd7cc6199fa1aa1f1d591ba9d"
"checksum mmap 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "0bc85448a6006dd2ba26a385a564a8a0f1f2c7e78c70f1a70b2e0f4af286b823"
"checksum murmurhash3 0.0.5 (registry+https://github.com/rust-lang/crates.io-index)" = "a2983372caf4480544083767bf2d27defafe32af49ab4df3a0b7fc90793a3664"
"checksum num 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)" = "a311b77ebdc5dd4cf6449d81e4135d9f0e3b153839ac90e648a8ef538f923525"
"checksum num-integer 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)" = "d1452e8b06e448a07f0e6ebb0bb1d92b8890eea63288c0b627331d53514d0fba"
"checksum num-iter 0.1.34 (registry+https://github.com/rust-lang/crates.io-index)" = "7485fcc84f85b4ecd0ea527b14189281cf27d60e583ae65ebc9c088b13dffe01"
"checksum num-traits 0.1.40 (registry+https://github.com/rust-lang/crates.io-index)" = "99843c856d68d8b4313b03a17e33c4bb42ae8f6610ea81b28abe076ac721b9b0"
"checksum nodrop 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)" = "9a2228dca57108069a5262f2ed8bd2e82496d2e074a06d1ccc7ce1687b6ae0a2"
"checksum num 0.1.42 (registry+https://github.com/rust-lang/crates.io-index)" = "4703ad64153382334aa8db57c637364c322d3372e097840c72000dabdcf6156e"
"checksum num-integer 0.1.36 (registry+https://github.com/rust-lang/crates.io-index)" = "f8d26da319fb45674985c78f1d1caf99aa4941f785d384a2ae36d0740bc3e2fe"
"checksum num-iter 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)" = "4b226df12c5a59b63569dd57fafb926d91b385dfce33d8074a412411b689d593"
"checksum num-traits 0.1.43 (registry+https://github.com/rust-lang/crates.io-index)" = "92e5113e9fd4cc14ded8e499429f396a20f98c772a47cc8622a736e1ec843c31"
"checksum num-traits 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "0b3c2bd9b9d21e48e956b763c9f37134dc62d9e95da6edb3f672cacb6caf3cd3"
"checksum pbr 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "e048e3afebb6c454bb1c5d0fe73fda54698b4715d78ed8e7302447c37736d23a"
"checksum pkg-config 0.3.9 (registry+https://github.com/rust-lang/crates.io-index)" = "3a8b4c6b8165cd1a1cd4b9b120978131389f64bdaf456435caa41e630edba903"
"checksum quick-error 1.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "3c36987d4978eb1be2e422b1e0423a557923a5c3e7e6f31d5699e9aafaefa469"
"checksum rand 0.3.15 (registry+https://github.com/rust-lang/crates.io-index)" = "022e0636ec2519ddae48154b028864bdce4eaf7d35226ab8e65c611be97b189d"
"checksum redox_syscall 0.1.26 (registry+https://github.com/rust-lang/crates.io-index)" = "9df6a71a1e67be2104410736b2389fb8e383c1d7e9e792d629ff13c02867147a"
"checksum regex 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "1731164734096285ec2a5ec7fea5248ae2f5485b3feeb0115af4fda2183b2d1b"
"checksum regex-syntax 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)" = "ad890a5eef7953f55427c50575c680c42841653abd2b028b68cd223d157f62db"
"checksum rmp 0.8.6 (registry+https://github.com/rust-lang/crates.io-index)" = "7ce560a5728f4eec697f07f8d7fa20608893d44b4f5b8f9f5f51a2987f3cffe2"
"checksum rmp-serde 0.13.4 (registry+https://github.com/rust-lang/crates.io-index)" = "b71335ea5c6ade501d5043f8138e88c4f0fac0466b730cea73b43fb2a1287ca9"
"checksum serde 1.0.10 (registry+https://github.com/rust-lang/crates.io-index)" = "433d7d9f8530d5a939ad5e0e72a6243d2e42a24804f70bf592c679363dcacb2f"
"checksum serde_bytes 0.10.1 (registry+https://github.com/rust-lang/crates.io-index)" = "12b8ae62bf2de9844de7506deb95667943b156ac18136a5c8124cb2ac0c51e19"
"checksum quick-error 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "eda5fe9b71976e62bc81b781206aaa076401769b2143379d3eb2118388babac4"
"checksum rand 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "eba5f8cb59cc50ed56be8880a5c7b496bfd9bd26394e176bc67884094145c2c5"
"checksum redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)" = "0d92eecebad22b767915e4d529f89f28ee96dbbf5a4810d2b844373f136417fd"
"checksum redox_termios 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "7e891cfe48e9100a70a3b6eb652fef28920c117d366339687bd5576160db0f76"
"checksum regex 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)" = "5be5347bde0c48cfd8c3fdc0766cdfe9d8a755ef84d620d6794c778c91de8b2b"
"checksum regex-syntax 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "8e931c58b93d86f080c734bfd2bce7dd0079ae2331235818133c8be7f422e20e"
"checksum remove_dir_all 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "b5d2f806b0fcdabd98acd380dc8daef485e22bcb7cddc811d1337967f2528cf5"
"checksum rmp 0.8.7 (registry+https://github.com/rust-lang/crates.io-index)" = "a3d45d7afc9b132b34a2479648863aa95c5c88e98b32285326a6ebadc80ec5c9"
"checksum rmp-serde 0.13.7 (registry+https://github.com/rust-lang/crates.io-index)" = "011e1d58446e9fa3af7cdc1fb91295b10621d3ac4cb3a85cc86385ee9ca50cd3"
"checksum runtime-fmt 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "647a821d66049faccc993fc3c379d1181b81a484097495cda79ffdb17b55b87f"
"checksum serde 1.0.27 (registry+https://github.com/rust-lang/crates.io-index)" = "db99f3919e20faa51bb2996057f5031d8685019b5a06139b1ce761da671b8526"
"checksum serde_bytes 0.10.3 (registry+https://github.com/rust-lang/crates.io-index)" = "52b678af90a3aebc4484c22d639bf374eb7d598988edb33fa73c4febd6046a59"
"checksum serde_utils 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)" = "f6e0edb364c93646633800df969086bc7c5c25fb3f1eb57349990d1cb4cae4bc"
"checksum serde_yaml 0.7.1 (registry+https://github.com/rust-lang/crates.io-index)" = "49d983aa39d2884a4b422bb11bb38f4f48fa05186e17469bc31e47d01e381111"
"checksum sodiumoxide 0.0.15 (registry+https://github.com/rust-lang/crates.io-index)" = "769317362b6ba15fe135147c9ea97dc773c76a312a5b91282dea27bcfed3596c"
"checksum serde_yaml 0.7.3 (registry+https://github.com/rust-lang/crates.io-index)" = "e0f868d400d9d13d00988da49f7f02aeac6ef00f11901a8c535bd59d777b9e19"
"checksum sodiumoxide 0.0.16 (registry+https://github.com/rust-lang/crates.io-index)" = "eb5cb2f14f9a51352ad65e59257a0a9459d5a36a3615f3d53a974c82fdaaa00a"
"checksum squash-sys 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)" = "db1f9dde91d819b7746e153bc32489fa19e6a106c3d7f2b92187a4efbdc88b40"
"checksum strsim 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)" = "b4d15c810519a91cf877e7e36e63fe068815c678181439f2f29e2562147c3694"
"checksum tar 0.4.13 (registry+https://github.com/rust-lang/crates.io-index)" = "281285b717926caa919ad905ef89c63d75805c7d89437fb873100925a53f2b1b"
"checksum tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "87974a6f5c1dfb344d733055601650059a3363de2a6104819293baff662132d6"
"checksum term_size 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "e2b6b55df3198cc93372e85dd2ed817f0e38ce8cc0f22eb32391bfad9c4bf209"
"checksum textwrap 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)" = "f86300c3e7416ee233abd7cda890c492007a3980f941f79185c753a701257167"
"checksum strsim 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)" = "bb4f380125926a99e52bc279241539c018323fab05ad6368b56f93d9369ff550"
"checksum tar 0.4.14 (registry+https://github.com/rust-lang/crates.io-index)" = "1605d3388ceb50252952ffebab4b5dc43017ead7e4481b175961c283bb951195"
"checksum tempdir 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)" = "f73eebdb68c14bcb24aef74ea96079830e7fa7b31a6106e42ea7ee887c1e134e"
"checksum termion 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "689a3bdfaab439fd92bc87df5c4c78417d3cbe537487274e9b0b2dce76e92096"
"checksum textwrap 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)" = "c0b59b6b4b44d867f1370ef1bd91bfb262bf07bf0ae65c202ea2fbc16153b693"
"checksum thread-scoped 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "bcbb6aa301e5d3b0b5ef639c9a9c7e2f1c944f177b460c04dc24c69b1fa2bd99"
"checksum thread_local 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)" = "1697c4b57aeeb7a536b647165a2825faddffb1d3bad386d507709bd51a90bb14"
"checksum time 0.1.38 (registry+https://github.com/rust-lang/crates.io-index)" = "d5d788d3aa77bc0ef3e9621256885555368b47bd495c13dd2e7413c89f845520"
"checksum unicode-segmentation 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "18127285758f0e2c6cf325bb3f3d138a12fee27de4f23e146cd6a179f26c2cf3"
"checksum thread_local 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "279ef31c19ededf577bfd12dfae728040a21f635b06a24cd670ff510edd38963"
"checksum time 0.1.39 (registry+https://github.com/rust-lang/crates.io-index)" = "a15375f1df02096fb3317256ce2cee6a1f42fc84ea5ad5fc8c421cfe40c73098"
"checksum unicode-width 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "bf3a113775714a22dcb774d8ea3655c53a32debae63a063acc00a91cc586245f"
"checksum unreachable 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "382810877fe448991dfc7f0dd6e3ae5d58088fd0ea5e35189655f84e6814fa56"
"checksum users 0.5.2 (registry+https://github.com/rust-lang/crates.io-index)" = "a7ae8fdf783cb9652109c99886459648feb92ecc749e6b8e7930f6decba74c7c"
"checksum users 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)" = "99ab1b53affc9f75f57da4a8b051a188e84d20d43bea0dd9bd8db71eebbca6da"
"checksum utf8-ranges 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "662fab6525a98beff2921d7f61a39e7d59e0b425ebc7d0d9e66d316e55124122"
"checksum vec_map 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)" = "887b5b631c2ad01628bbbaa7dd4c869f80d3186688f8d0b6f58774fbe324988c"
"checksum void 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "6a02e4885ed3bc0f2de90ea6dd45ebcbb66dacffe03547fadbb0eeae2770887d"
"checksum winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)" = "167dc9d6949a9b857f3451275e911c3f44255842c1f7a76f33c55103a909087a"
"checksum winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)" = "04e3bd221fcbe8a271359c04f21a76db7d0c6028862d1bb5512d85e1e2eb5bb3"
"checksum winapi-build 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "2d315eee3b34aca4797b2da6b13ed88266e6d612562a0c46390af8299fc699bc"
"checksum winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
"checksum winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
"checksum xattr 0.1.11 (registry+https://github.com/rust-lang/crates.io-index)" = "5f04de8a1346489a2f9e9bd8526b73d135ec554227b17568456e86aa35b6f3fc"
"checksum xattr 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "f20ed92d3af1dcee2ab0b8f167c2ce4865e5a4fa174656c9432d77bda446e11d"
"checksum yaml-rust 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "e66366e18dc58b46801afbf2ca7661a9f59cc8c5962c29892b6039b4f86fa992"
"checksum xattr 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "abb373b92de38a4301d66bec009929b4fb83120ea1c4a401be89dbe0b9777443"
"checksum yaml-rust 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "57ab38ee1a4a266ed033496cf9af1828d8d6e6c1cfa5f643a2809effcae4d628"

Cargo.toml

@@ -1,9 +1,12 @@
[package]
name = "zvault"
version = "0.4.0"
version = "0.5.0"
authors = ["Dennis Schwerdel <schwerdel@googlemail.com>"]
description = "Deduplicating backup tool"
[profile.release]
lto = true
[dependencies]
serde = "1.0"
rmp-serde = "0.13"
@@ -16,28 +19,26 @@ blake2-rfc = "0.2"
murmurhash3 = "0.0.5"
chrono = "0.4"
clap = "^2.24"
log = "0.3"
log = "0.4"
byteorder = "1.0"
ansi_term = "0.9"
sodiumoxide = "0.0.15"
libsodium-sys = "0.0.15"
ansi_term = "0.11"
sodiumoxide = "0.0.16"
libsodium-sys = "0.0.16"
filetime = "0.1"
regex = "0.2"
fuse = "0.3"
lazy_static = "0.2"
rand = "0.3"
lazy_static = "1.0"
rand = "0.4"
tar = "0.4"
xattr = "0.2"
crossbeam = "0.2"
crossbeam = "0.3"
pbr = "1.0"
users = "0.5"
users = "0.6"
time = "*"
libc = "0.2"
index = {path="index"}
chunking = {path="chunking"}
[build-dependencies]
pkg-config = "0.3"
runtime-fmt = "0.3"
locale_config = "^0.2.2"
mmap = "0.1"
[features]
default = []

LICENSE.md

@@ -1,7 +1,7 @@
# License: GPL-3
zVault - Deduplicating backup solution
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by

README.md

@@ -1,4 +1,8 @@
# zVault Backup Solution
[![Build Status](https://travis-ci.org/dswd/zvault.svg?branch=master)](https://travis-ci.org/dswd/zvault)
[![Coverage Status](https://coveralls.io/repos/dswd/zvault/badge.svg?branch=master&service=github)](https://coveralls.io/github/dswd/zvault?branch=master)
zVault is a highly efficient deduplicating backup solution that supports
client-side encryption, compression and remote storage of backup data.

chunking/Cargo.lock (generated, deleted, 14 changes)

@@ -1,14 +0,0 @@
[root]
name = "chunking"
version = "0.1.0"
dependencies = [
"quick-error 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "quick-error"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[metadata]
"checksum quick-error 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "0aad603e8d7fb67da22dbdf1f4b826ce8829e406124109e73cf1b2454b93a71c"

chunking/Cargo.toml

@@ -1,7 +0,0 @@
[package]
name = "chunking"
version = "0.1.0"
authors = ["Dennis Schwerdel <schwerdel@googlemail.com>"]
[dependencies]
quick-error = "1.1"

chunking/src/fastcdc.rs

@@ -1,119 +0,0 @@
use super::*;
use std::ptr;
// FastCDC
// Paper: "FastCDC: a Fast and Efficient Content-Defined Chunking Approach for Data Deduplication"
// Paper-URL: https://www.usenix.org/system/files/conference/atc16/atc16-paper-xia.pdf
// Presentation: https://www.usenix.org/sites/default/files/conference/protected-files/atc16_slides_xia.pdf
// Creating 256 pseudo-random values (based on Knuth's MMIX)
fn create_gear(seed: u64) -> [u64; 256] {
let mut table = [0u64; 256];
let a = 6364136223846793005;
let c = 1442695040888963407;
let mut v = seed;
for t in &mut table.iter_mut() {
v = v.wrapping_mul(a).wrapping_add(c);
*t = v;
}
table
}
fn get_masks(avg_size: usize, nc_level: usize, seed: u64) -> (u64, u64) {
let bits = (avg_size.next_power_of_two() - 1).count_ones();
if bits == 13 {
// From the paper
return (0x0003590703530000, 0x0000d90003530000);
}
let mut mask = 0u64;
let mut v = seed;
let a = 6364136223846793005;
let c = 1442695040888963407;
while mask.count_ones() < bits - nc_level as u32 {
v = v.wrapping_mul(a).wrapping_add(c);
mask = (mask | 1).rotate_left(v as u32 & 0x3f);
}
let mask_long = mask;
while mask.count_ones() < bits + nc_level as u32 {
v = v.wrapping_mul(a).wrapping_add(c);
mask = (mask | 1).rotate_left(v as u32 & 0x3f);
}
let mask_short = mask;
(mask_short, mask_long)
}
pub struct FastCdcChunker {
buffer: [u8; 4096],
buffered: usize,
gear: [u64; 256],
min_size: usize,
max_size: usize,
avg_size: usize,
mask_long: u64,
mask_short: u64,
}
impl FastCdcChunker {
pub fn new(avg_size: usize, seed: u64) -> Self {
let (mask_short, mask_long) = get_masks(avg_size, 2, seed);
FastCdcChunker {
buffer: [0; 4096],
buffered: 0,
gear: create_gear(seed),
min_size: avg_size/4,
max_size: avg_size*8,
avg_size: avg_size,
mask_long: mask_long,
mask_short: mask_short,
}
}
}
impl Chunker for FastCdcChunker {
#[allow(unknown_lints,explicit_counter_loop,needless_range_loop)]
fn chunk(&mut self, r: &mut Read, mut w: &mut Write) -> Result<ChunkerStatus, ChunkerError> {
let mut max;
let mut hash = 0u64;
let mut pos = 0;
let gear = &self.gear;
let buffer = &mut self.buffer;
let min_size = self.min_size;
let mask_short = self.mask_short;
let mask_long = self.mask_long;
let avg_size = self.avg_size;
let max_size = self.max_size;
loop {
// Fill the buffer, there might be some bytes still in there from last chunk
max = try!(r.read(&mut buffer[self.buffered..]).map_err(ChunkerError::Read)) + self.buffered;
// If nothing to do, finish
if max == 0 {
return Ok(ChunkerStatus::Finished)
}
for i in 0..max {
if pos >= min_size {
// Hash update
hash = (hash << 1).wrapping_add(gear[buffer[i] as usize]);
// 3 options for break point
// 1) mask_short matches and chunk is smaller than average
// 2) mask_long matches and chunk is longer or equal to average
// 3) chunk reached max_size
if pos < avg_size && hash & mask_short == 0
|| pos >= avg_size && hash & mask_long == 0
|| pos >= max_size {
// Write all bytes from this chunk out to sink and store rest for next chunk
try!(w.write_all(&buffer[..i+1]).map_err(ChunkerError::Write));
unsafe { ptr::copy(buffer[i+1..].as_ptr(), buffer.as_mut_ptr(), max-i-1) };
self.buffered = max-i-1;
return Ok(ChunkerStatus::Continue);
}
}
pos += 1;
}
try!(w.write_all(&buffer[..max]).map_err(ChunkerError::Write));
self.buffered = 0;
}
}
}
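
The FastCDC module above was deleted here because all code moved into the main crate (see the "All in one module" commit and the CHANGELOG entry). To make the control flow concrete, here is a minimal sketch of how such a chunker is driven; the Chunker trait and the ChunkerStatus/ChunkerError types below are stand-ins for the crate's `use super::*;` imports, not the real definitions, and the FastCdcChunker shown above is assumed to be in scope:

use std::io::{Cursor, Read, Write};

// Assumed stand-ins for the types the deleted module imported via `use super::*;`.
#[derive(Debug)]
pub enum ChunkerStatus { Continue, Finished }

#[derive(Debug)]
pub enum ChunkerError { Read(std::io::Error), Write(std::io::Error) }

pub trait Chunker {
    fn chunk(&mut self, r: &mut dyn Read, w: &mut dyn Write)
        -> Result<ChunkerStatus, ChunkerError>;
}

fn main() {
    // The benchmark logs below report "Chunker: fastcdc/16", i.e. a 16 KiB
    // average; the constructor above then uses min_size = 4 KiB and
    // max_size = 128 KiB, and get_masks picks mask_short with more one-bits
    // (cuts suppressed before avg_size) and mask_long with fewer one-bits
    // (cuts encouraged after avg_size).
    let mut chunker = FastCdcChunker::new(16 * 1024, 0);
    let mut input = Cursor::new(vec![0u8; 1 << 20]);
    loop {
        let mut chunk = Vec::new();
        match chunker.chunk(&mut input, &mut chunk) {
            Ok(ChunkerStatus::Continue) => println!("chunk of {} bytes", chunk.len()),
            Ok(ChunkerStatus::Finished) => break,
            Err(err) => { eprintln!("chunking failed: {:?}", err); break; }
        }
    }
}

Each call emits exactly one chunk into the writer; the bytes after the cut point remain in the chunker's internal 4 KiB buffer (the `buffered` field) and are consumed at the start of the next call.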

Makefile

@@ -25,6 +25,7 @@ $(PACKAGE)/man/*: ../docs/man/*
$(PACKAGE)/zvault: ../target/release/zvault
cp ../target/release/zvault $(PACKAGE)/zvault
strip -s $(PACKAGE)/zvault
../target/release/zvault: ../src/*.rs ../Cargo.toml
(cd ..; cargo build --release)


@@ -1,269 +1,396 @@
++ rm -rf repos
++ mkdir repos
++ target/release/zvault init --compression brotli/3 repos/zvault_brotli3
real 0m0.003s
user 0m0.000s
sys 0m0.000s
++ target/release/zvault init --compression brotli/6 repos/zvault_brotli6
real 0m0.004s
user 0m0.000s
sys 0m0.000s
++ target/release/zvault init --compression lzma2/2 repos/zvault_lzma2
real 0m0.004s
user 0m0.000s
sys 0m0.000s
++ mkdir -p repos/remotes/zvault_brotli3 repos/remotes/zvault_brotli6 repos/remotes/zvault_lzma2
+++ pwd
+++ pwd
++ target/release/zvault init --compression brotli/3 --remote /home/dschwerdel/shared/projekte/zvault.rs/repos/remotes/zvault_brotli3 /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_brotli3
Bundle size: 25.0 MiB
Chunker: fastcdc/16
Compression: brotli/3
Encryption: none
Hash method: blake2
+++ pwd
+++ pwd
++ target/release/zvault init --compression brotli/6 --remote /home/dschwerdel/shared/projekte/zvault.rs/repos/remotes/zvault_brotli6 /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_brotli6
Bundle size: 25.0 MiB
Chunker: fastcdc/16
Compression: brotli/6
Encryption: none
Hash method: blake2
+++ pwd
+++ pwd
++ target/release/zvault init --compression lzma2/2 --remote /home/dschwerdel/shared/projekte/zvault.rs/repos/remotes/zvault_lzma2 /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_lzma2
Bundle size: 25.0 MiB
Chunker: fastcdc/16
Compression: lzma/2
Encryption: none
Hash method: blake2
++ attic init repos/attic
Initializing repository at "repos/attic"
Encryption NOT enabled.
Use the "--encryption=passphrase|keyfile" to enable encryption.
Initializing cache...
real 0m0.147s
user 0m0.116s
sys 0m0.012s
++ borg init -e none repos/borg
real 0m0.403s
user 0m0.336s
sys 0m0.048s
++ borg init -e none repos/borg-zlib
real 0m0.338s
user 0m0.292s
sys 0m0.024s
++ zbackup init --non-encrypted repos/zbackup
++ find test_data/silesia -type f
++ xargs cat
+++ pwd
++ target/release/zvault backup test_data/silesia /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_brotli3::silesia1
info: No reference backup found, doing a full scan instead
info: Backup finished
Date: Sun, 4 Mar 2018 16:44:37 +0100
Source: lap-it-032:test_data/silesia
Duration: 0:00:04.0
Entries: 12 files, 1 dirs
Total backup size: 202.3 MiB
Modified data size: 202.3 MiB
Deduplicated size: 202.3 MiB, -0.0%
Compressed size: 64.5 MiB in 4 bundles, -68.1%
Chunk count: 11017, avg size: 18.8 KiB
real 0m0.009s
user 0m0.000s
sys 0m0.000s
++ cat
++ target/release/zvault backup repos/zvault_brotli3::silesia1 test_data/silesia.tar
WARN - Partial backups are not implemented yet, creating full backup
real 0m4.049s
user 0m3.714s
sys 0m0.504s
+++ pwd
++ target/release/zvault backup test_data/silesia /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_brotli3::silesia2
info: Using backup silesia1 as reference
info: Backup finished
Date: Sun, 4 Mar 2018 16:44:41 +0100
Source: lap-it-032:test_data/silesia
Duration: 0:00:00.0
Entries: 12 files, 1 dirs
Total backup size: 202.3 MiB
Modified data size: 0 Byte
Deduplicated size: 0 Byte, NaN%
Compressed size: 0 Byte in 0 bundles, NaN%
Chunk count: 0, avg size: 0 Byte
real 0m6.034s
user 0m5.508s
sys 0m0.424s
++ target/release/zvault backup repos/zvault_brotli3::silesia2 test_data/silesia.tar
WARN - Partial backups are not implemented yet, creating full backup
real 0m0.009s
user 0m0.004s
sys 0m0.004s
+++ pwd
++ target/release/zvault backup test_data/silesia /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_brotli6::silesia1
info: No reference backup found, doing a full scan instead
info: Backup finished
Date: Sun, 4 Mar 2018 16:44:41 +0100
Source: lap-it-032:test_data/silesia
Duration: 0:00:16.1
Entries: 12 files, 1 dirs
Total backup size: 202.3 MiB
Modified data size: 202.3 MiB
Deduplicated size: 202.3 MiB, -0.0%
Compressed size: 56.9 MiB in 4 bundles, -71.9%
Chunk count: 11017, avg size: 18.8 KiB
real 0m1.425s
user 0m1.348s
sys 0m0.076s
++ target/release/zvault backup repos/zvault_brotli6::silesia1 test_data/silesia.tar
WARN - Partial backups are not implemented yet, creating full backup
real 0m16.100s
user 0m15.441s
sys 0m0.833s
+++ pwd
++ target/release/zvault backup test_data/silesia /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_brotli6::silesia2
info: Using backup silesia1 as reference
info: Backup finished
Date: Sun, 4 Mar 2018 16:44:57 +0100
Source: lap-it-032:test_data/silesia
Duration: 0:00:00.0
Entries: 12 files, 1 dirs
Total backup size: 202.3 MiB
Modified data size: 0 Byte
Deduplicated size: 0 Byte, NaN%
Compressed size: 0 Byte in 0 bundles, NaN%
Chunk count: 0, avg size: 0 Byte
real 0m23.035s
user 0m22.156s
sys 0m0.692s
++ target/release/zvault backup repos/zvault_brotli6::silesia2 test_data/silesia.tar
WARN - Partial backups are not implemented yet, creating full backup
real 0m0.008s
user 0m0.000s
sys 0m0.008s
+++ pwd
++ target/release/zvault backup test_data/silesia /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_lzma2::silesia1
info: No reference backup found, doing a full scan instead
info: Backup finished
Date: Sun, 4 Mar 2018 16:44:57 +0100
Source: lap-it-032:test_data/silesia
Duration: 0:00:45.1
Entries: 12 files, 1 dirs
Total backup size: 202.3 MiB
Modified data size: 202.3 MiB
Deduplicated size: 202.3 MiB, -0.0%
Compressed size: 53.9 MiB in 4 bundles, -73.3%
Chunk count: 11017, avg size: 18.8 KiB
real 0m1.150s
user 0m1.120s
sys 0m0.024s
++ target/release/zvault backup repos/zvault_lzma2::silesia1 test_data/silesia.tar
WARN - Partial backups are not implemented yet, creating full backup
real 0m45.068s
user 0m44.571s
sys 0m0.628s
+++ pwd
++ target/release/zvault backup test_data/silesia /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_lzma2::silesia2
info: Using backup silesia1 as reference
info: Backup finished
Date: Sun, 4 Mar 2018 16:45:42 +0100
Source: lap-it-032:test_data/silesia
Duration: 0:00:00.0
Entries: 12 files, 1 dirs
Total backup size: 202.3 MiB
Modified data size: 0 Byte
Deduplicated size: 0 Byte, NaN%
Compressed size: 0 Byte in 0 bundles, NaN%
Chunk count: 0, avg size: 0 Byte
real 0m54.011s
user 0m53.044s
sys 0m0.728s
++ target/release/zvault backup repos/zvault_lzma2::silesia2 test_data/silesia.tar
WARN - Partial backups are not implemented yet, creating full backup
real 0m0.030s
user 0m0.019s
sys 0m0.011s
++ attic create repos/attic::silesia1 test_data/silesia
real 0m1.157s
user 0m1.108s
sys 0m0.040s
++ attic create repos/attic::silesia1 test_data/silesia.tar
real 0m12.686s
user 0m11.810s
sys 0m0.373s
++ attic create repos/attic::silesia2 test_data/silesia
real 0m13.427s
user 0m12.256s
sys 0m0.476s
++ attic create repos/attic::silesia2 test_data/silesia.tar
real 0m0.265s
user 0m0.185s
sys 0m0.047s
++ borg create -C none repos/borg::silesia1 test_data/silesia
real 0m1.930s
user 0m1.804s
sys 0m0.092s
++ borg create -C none repos/borg::silesia1 test_data/silesia.tar
real 0m4.206s
user 0m2.139s
sys 0m0.870s
++ borg create -C none repos/borg::silesia2 test_data/silesia
real 0m5.246s
user 0m2.516s
sys 0m1.132s
++ borg create -C none repos/borg::silesia2 test_data/silesia.tar
real 0m0.455s
user 0m0.357s
sys 0m0.071s
++ borg create -C zlib repos/borg-zlib::silesia1 test_data/silesia
real 0m3.029s
user 0m2.408s
sys 0m0.428s
++ borg create -C zlib repos/borg-zlib::silesia1 test_data/silesia.tar
real 0m13.184s
user 0m12.293s
sys 0m0.500s
++ borg create -C zlib repos/borg-zlib::silesia2 test_data/silesia
real 0m14.833s
user 0m13.524s
sys 0m0.692s
++ borg create -C zlib repos/borg-zlib::silesia2 test_data/silesia.tar
real 0m2.413s
user 0m1.996s
sys 0m0.368s
real 0m0.416s
user 0m0.335s
sys 0m0.059s
++ tar -c test_data/silesia
++ zbackup backup --non-encrypted repos/zbackup/backups/silesia1
Loading index...
Index loaded.
Using up to 4 thread(s) for compression
real 0m52.613s
user 3m12.460s
sys 0m2.568s
real 0m52.286s
user 2m52.262s
sys 0m3.453s
++ tar -c test_data/silesia
++ zbackup backup --non-encrypted repos/zbackup/backups/silesia2
Loading index...
Loading index file 1e374b3c9ce07b4d9ad4238e35e5834c07d3a4ca984bb842...
Loading index file 6ff054dcc4af8c472a5fbd661a8f61409e44a4fafc287d4d...
Index loaded.
Using up to 4 thread(s) for compression
real 0m2.141s
user 0m2.072s
sys 0m0.064s
real 0m1.983s
user 0m1.844s
sys 0m0.315s
++ du -h test_data/silesia.tar
203M test_data/silesia.tar
++ du -sh repos/zvault_brotli3/bundles repos/zvault_brotli6/bundles repos/zvault_lzma2/bundles repos/attic repos/borg repos/borg-zlib repos/zbackup
66M repos/zvault_brotli3/bundles
58M repos/zvault_brotli6/bundles
55M repos/zvault_lzma2/bundles
68M repos/attic
203M repos/borg
66M repos/borg-zlib
52M repos/zbackup
203M test_data/silesia.tar
++ du -sh repos/remotes/zvault_brotli3 repos/remotes/zvault_brotli6 repos/remotes/zvault_lzma2 repos/attic repos/borg repos/borg-zlib repos/zbackup
65M repos/remotes/zvault_brotli3
58M repos/remotes/zvault_brotli6
55M repos/remotes/zvault_lzma2
68M repos/attic
203M repos/borg
66M repos/borg-zlib
52M repos/zbackup
++ rm -rf repos
++ mkdir repos
++ target/release/zvault init --compression brotli/3 repos/zvault_brotli3
real 0m0.004s
user 0m0.000s
sys 0m0.000s
++ target/release/zvault init --compression brotli/6 repos/zvault_brotli6
real 0m0.003s
user 0m0.000s
sys 0m0.000s
++ target/release/zvault init --compression lzma2/2 repos/zvault_lzma2
real 0m0.003s
user 0m0.000s
sys 0m0.000s
++ mkdir -p repos/remotes/zvault_brotli3 repos/remotes/zvault_brotli6 repos/remotes/zvault_lzma2
+++ pwd
+++ pwd
++ target/release/zvault init --compression brotli/3 --remote /home/dschwerdel/shared/projekte/zvault.rs/repos/remotes/zvault_brotli3 /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_brotli3
Bundle size: 25.0 MiB
Chunker: fastcdc/16
Compression: brotli/3
Encryption: none
Hash method: blake2
+++ pwd
+++ pwd
++ target/release/zvault init --compression brotli/6 --remote /home/dschwerdel/shared/projekte/zvault.rs/repos/remotes/zvault_brotli6 /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_brotli6
Bundle size: 25.0 MiB
Chunker: fastcdc/16
Compression: brotli/6
Encryption: none
Hash method: blake2
+++ pwd
+++ pwd
++ target/release/zvault init --compression lzma2/2 --remote /home/dschwerdel/shared/projekte/zvault.rs/repos/remotes/zvault_lzma2 /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_lzma2
Bundle size: 25.0 MiB
Chunker: fastcdc/16
Compression: lzma/2
Encryption: none
Hash method: blake2
++ attic init repos/attic
Initializing repository at "repos/attic"
Encryption NOT enabled.
Use the "--encryption=passphrase|keyfile" to enable encryption.
Initializing cache...
real 0m0.169s
user 0m0.136s
sys 0m0.012s
++ borg init -e none repos/borg
real 0m0.364s
user 0m0.320s
sys 0m0.020s
++ borg init -e none repos/borg-zlib
real 0m0.393s
user 0m0.352s
sys 0m0.020s
++ zbackup init --non-encrypted repos/zbackup
++ find test_data/ubuntu -type f
++ xargs cat
+++ pwd
++ target/release/zvault backup test_data/ubuntu /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_brotli3::ubuntu1
info: No reference backup found, doing a full scan instead
info: Backup finished
Date: Sun, 4 Mar 2018 16:47:09 +0100
Source: lap-it-032:test_data/ubuntu
Duration: 0:00:02.0
Entries: 4418 files, 670 dirs
Total backup size: 83.2 MiB
Modified data size: 83.2 MiB
Deduplicated size: 74.7 MiB, -10.2%
Compressed size: 29.6 MiB in 3 bundles, -60.3%
Chunk count: 12038, avg size: 6.4 KiB
real 0m0.003s
user 0m0.000s
sys 0m0.000s
++ cat
++ target/release/zvault backup repos/zvault_brotli3::ubuntu1 test_data/ubuntu.tar
WARN - Partial backups are not implemented yet, creating full backup
real 0m2.009s
user 0m1.718s
sys 0m0.369s
+++ pwd
++ target/release/zvault backup test_data/ubuntu /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_brotli3::ubuntu2
info: Using backup ubuntu1 as reference
info: Backup finished
Date: Sun, 4 Mar 2018 16:47:11 +0100
Source: lap-it-032:test_data/ubuntu
Duration: 0:00:00.1
Entries: 4418 files, 670 dirs
Total backup size: 83.2 MiB
Modified data size: 0 Byte
Deduplicated size: 0 Byte, NaN%
Compressed size: 0 Byte in 0 bundles, NaN%
Chunk count: 0, avg size: 0 Byte
real 0m5.496s
user 0m5.000s
sys 0m0.492s
++ target/release/zvault backup repos/zvault_brotli3::ubuntu2 test_data/ubuntu.tar
WARN - Partial backups are not implemented yet, creating full backup
real 0m0.112s
user 0m0.032s
sys 0m0.079s
+++ pwd
++ target/release/zvault backup test_data/ubuntu /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_brotli6::ubuntu1
info: No reference backup found, doing a full scan instead
info: Backup finished
Date: Sun, 4 Mar 2018 16:47:11 +0100
Source: lap-it-032:test_data/ubuntu
Duration: 0:00:07.6
Entries: 4418 files, 670 dirs
Total backup size: 83.2 MiB
Modified data size: 83.2 MiB
Deduplicated size: 74.7 MiB, -10.2%
Compressed size: 24.1 MiB in 2 bundles, -67.7%
Chunk count: 12038, avg size: 6.4 KiB
real 0m1.156s
user 0m1.104s
sys 0m0.048s
++ target/release/zvault backup repos/zvault_brotli6::ubuntu1 test_data/ubuntu.tar
WARN - Partial backups are not implemented yet, creating full backup
real 0m7.572s
user 0m7.156s
sys 0m0.424s
+++ pwd
++ target/release/zvault backup test_data/ubuntu /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_brotli6::ubuntu2
info: Using backup ubuntu1 as reference
info: Backup finished
Date: Sun, 4 Mar 2018 16:47:19 +0100
Source: lap-it-032:test_data/ubuntu
Duration: 0:00:00.1
Entries: 4418 files, 670 dirs
Total backup size: 83.2 MiB
Modified data size: 0 Byte
Deduplicated size: 0 Byte, NaN%
Compressed size: 0 Byte in 0 bundles, NaN%
Chunk count: 0, avg size: 0 Byte
real 0m21.012s
user 0m20.524s
sys 0m0.464s
++ target/release/zvault backup repos/zvault_brotli6::ubuntu2 test_data/ubuntu.tar
WARN - Partial backups are not implemented yet, creating full backup
real 0m0.127s
user 0m0.058s
sys 0m0.065s
+++ pwd
++ target/release/zvault backup test_data/ubuntu /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_lzma2::ubuntu1
info: No reference backup found, doing a full scan instead
info: Backup finished
Date: Sun, 4 Mar 2018 16:47:19 +0100
Source: lap-it-032:test_data/ubuntu
Duration: 0:00:17.6
Entries: 4418 files, 670 dirs
Total backup size: 83.2 MiB
Modified data size: 83.2 MiB
Deduplicated size: 74.7 MiB, -10.2%
Compressed size: 21.6 MiB in 2 bundles, -71.1%
Chunk count: 12038, avg size: 6.4 KiB
real 0m0.999s
user 0m0.964s
sys 0m0.032s
++ target/release/zvault backup repos/zvault_lzma2::ubuntu1 test_data/ubuntu.tar
WARN - Partial backups are not implemented yet, creating full backup
real 0m17.619s
user 0m17.223s
sys 0m0.376s
+++ pwd
++ target/release/zvault backup test_data/ubuntu /home/dschwerdel/shared/projekte/zvault.rs/repos/zvault_lzma2::ubuntu2
info: Using backup ubuntu1 as reference
info: Backup finished
Date: Sun, 4 Mar 2018 16:47:37 +0100
Source: lap-it-032:test_data/ubuntu
Duration: 0:00:00.1
Entries: 4418 files, 670 dirs
Total backup size: 83.2 MiB
Modified data size: 0 Byte
Deduplicated size: 0 Byte, NaN%
Compressed size: 0 Byte in 0 bundles, NaN%
Chunk count: 0, avg size: 0 Byte
real 0m55.683s
user 0m54.992s
sys 0m0.656s
++ target/release/zvault backup repos/zvault_lzma2::ubuntu2 test_data/ubuntu.tar
WARN - Partial backups are not implemented yet, creating full backup
real 0m0.136s
user 0m0.080s
sys 0m0.056s
++ attic create repos/attic::ubuntu1 test_data/ubuntu
real 0m0.995s
user 0m0.968s
sys 0m0.024s
++ attic create repos/attic::ubuntu1 test_data/ubuntu.tar
real 0m6.915s
user 0m6.175s
sys 0m0.503s
++ attic create repos/attic::ubuntu2 test_data/ubuntu
real 0m13.093s
user 0m11.880s
sys 0m0.512s
++ attic create repos/attic::ubuntu2 test_data/ubuntu.tar
real 0m0.554s
user 0m0.416s
sys 0m0.107s
++ borg create -C none repos/borg::ubuntu1 test_data/ubuntu
real 0m1.722s
user 0m1.620s
sys 0m0.072s
++ borg create -C none repos/borg::ubuntu1 test_data/ubuntu.tar
real 0m3.047s
user 0m1.872s
sys 0m0.576s
++ borg create -C none repos/borg::ubuntu2 test_data/ubuntu
real 0m4.551s
user 0m2.120s
sys 0m1.012s
++ borg create -C none repos/borg::ubuntu2 test_data/ubuntu.tar
real 0m0.929s
user 0m0.695s
sys 0m0.175s
++ borg create -C zlib repos/borg-zlib::ubuntu1 test_data/ubuntu
real 0m2.403s
user 0m1.996s
sys 0m0.308s
++ borg create -C zlib repos/borg-zlib::ubuntu1 test_data/ubuntu.tar
real 0m7.859s
user 0m7.100s
sys 0m0.484s
++ borg create -C zlib repos/borg-zlib::ubuntu2 test_data/ubuntu
real 0m14.114s
user 0m12.768s
sys 0m0.648s
++ borg create -C zlib repos/borg-zlib::ubuntu2 test_data/ubuntu.tar
real 0m2.091s
user 0m1.780s
sys 0m0.280s
real 0m0.955s
user 0m0.720s
sys 0m0.183s
++ tar -c test_data/ubuntu
++ zbackup backup --non-encrypted repos/zbackup/backups/ubuntu1
Loading index...
Index loaded.
Using up to 4 thread(s) for compression
real 0m38.218s
user 2m21.564s
sys 0m3.832s
real 0m17.229s
user 0m58.868s
sys 0m1.395s
++ zbackup backup --non-encrypted repos/zbackup/backups/ubuntu2
++ tar -c test_data/ubuntu
Loading index...
Loading index file 4f106a9d29c26e4132ae67e9528e1ed6f8579fe6ee6fd671...
Loading index file 6429a26e69a74bb1ae139efc7fb1446881a15d3c4170c9b5...
Index loaded.
Using up to 4 thread(s) for compression
real 0m1.755s
user 0m1.728s
sys 0m0.024s
real 0m1.033s
user 0m0.856s
sys 0m0.177s
++ du -h test_data/ubuntu.tar
176M test_data/ubuntu.tar
++ du -sh repos/zvault_brotli3/bundles repos/zvault_brotli6/bundles repos/zvault_lzma2/bundles repos/attic repos/borg repos/borg-zlib repos/zbackup
77M repos/zvault_brotli3/bundles
68M repos/zvault_brotli6/bundles
63M repos/zvault_lzma2/bundles
84M repos/attic
176M repos/borg
83M repos/borg-zlib
64M repos/zbackup
98M test_data/ubuntu.tar
++ du -sh repos/remotes/zvault_brotli3 repos/remotes/zvault_brotli6 repos/remotes/zvault_lzma2 repos/attic repos/borg repos/borg-zlib repos/zbackup
30M repos/remotes/zvault_brotli3
25M repos/remotes/zvault_brotli6
22M repos/remotes/zvault_lzma2
35M repos/attic
83M repos/borg
36M repos/borg-zlib
24M repos/zbackup

25
docs/comparison.md Normal file
View File

@ -0,0 +1,25 @@
## Silesia corpus
| Tool | 1st run | 2nd run | Repo Size |
| -------------- | -------:| -------:| ---------:|
| zvault/brotli3 | 4.0s | 0.0s | 65 MiB |
| zvault/brotli6 | 16.1s | 0.0s | 58 MiB |
| zvault/lzma2 | 45.1s | 0.0s | 55 MiB |
| attic | 12.7s | 0.3s | 68 MiB |
| borg | 4.2s | 0.5s | 203 MiB |
| borg/zlib | 13.2s | 0.4s | 66 MiB |
| zbackup | 52.3s | 2.0s | 52 MiB |
## Ubuntu 16.04 docker image
| Tool | 1st run | 2nd run | Repo Size |
| -------------- | -------:| -------:| ---------:|
| zvault/brotli3 | 2.0s | 0.1s | 30 MiB |
| zvault/brotli6 | 7.6s | 0.1s | 25 MiB |
| zvault/lzma2 | 17.6s | 0.1s | 22 MiB |
| attic | 6.9s | 0.6s | 35 MiB |
| borg | 3.0s | 0.9s | 83 MiB |
| borg/zlib | 7.9s | 1.0s | 36 MiB |
| zbackup | 17.2s | 1.0s | 24 MiB |

View File

@ -62,5 +62,5 @@ key will be set as default encryption key.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -84,5 +84,5 @@ The options are exactly the same as for _zvault-init(1)_.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -40,5 +40,5 @@ running _zvault-vacuum(1)_ with different ratios.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -161,5 +161,5 @@ the case of directories) will be left out of the backup.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -41,5 +41,5 @@ names on the remote storage that do not relate to the bundle id.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -38,5 +38,5 @@ given its bundle id.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -105,5 +105,5 @@ has become inaccessible.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -88,5 +88,5 @@ data and can be changed at any time without any drawback.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -39,5 +39,5 @@ If `repository` is omitted, the default repository location is used instead.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -47,5 +47,5 @@ modified (_mod_).
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -41,5 +41,5 @@ writes it to the given file `FILE`.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -55,5 +55,5 @@ imported via _zvault-backup(1)_ also with the `--tar` flag.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -39,5 +39,5 @@ The repository, backup or backup subtree given by `PATH` must be in the format
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -92,5 +92,5 @@ configuration can be changed by _zvault-config(1)_ later.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -50,5 +50,5 @@ filesystem which is faster than _zvault-list(1)_ for multiple listings.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -52,5 +52,5 @@ this way is slower than using _zvault-restore(1)_.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -104,5 +104,5 @@ data of the deleted backups becomes inaccessible and can not be restored.**
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -62,5 +62,5 @@ data of the deleted backups becomes inaccessible and can not be restored.**
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -54,5 +54,5 @@ If `--tar` is not set, the data will be written into the existing folder `DST`.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -81,5 +81,5 @@ should be avoided when the storage space permits it.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -42,5 +42,5 @@ earliest backup that version appeared in.
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -100,8 +100,8 @@ regarded as not set at all.
Examples:
- `~/.zvault` references the repository in `~/.zvault` and is identical with
`::`.
- `~/.zvault/repos/default` references the repository in
`~/.zvault/repos/default` and is identical with `::`.
- `::backup1` references the backup `backup1` in the default repository
- `::backup1::/` references the root folder of the backup `backup1` in the
default repository
@ -189,7 +189,7 @@ The chunker algorithm and chunk size are configured together in the format
`algorithm/size` where algorithm is one of `rabin`, `ae` and `fastcdc` and size
is the size in KiB e.g. `16`. So the recommended configuration is `fastcdc/16`.
Please not that since the chunker algorithm and chunk size affect the chunks
Please note that since the chunker algorithm and chunk size affect the chunks
created from the input data, any change to those values will make existing
chunks inaccessible for deduplication purposes. The old data is still readable
but new backups will have to store all data again.
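As shown later in this diff, `ChunkerType::from_string` parses exactly this `algorithm/size` format; a quick sketch (assuming the chunker module is in scope):
```
fn main() {
    // "fastcdc/16" -> the FastCdc algorithm with a 16 KiB average chunk size
    assert!(ChunkerType::from_string("fastcdc/16").is_ok());
}
```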
@ -198,7 +198,7 @@ but new backups will have to store all data again.
### Compression
ZVault offers different compression algorithms that can be used to compress the
stored data after deduplication. The compression ratio that can be achieved
mostly depends on the input data (test data can be compressed well and media
mostly depends on the input data (text data can be compressed well and media
data like music and videos are already compressed and can not be compressed
significantly).
@ -341,5 +341,5 @@ To reclaim storage space after removing some backups vacuum needs to be run
## COPYRIGHT
Copyright (C) 2017 Dennis Schwerdel
Copyright (C) 2017-2018 Dennis Schwerdel
This software is licensed under GPL-3 or newer (see LICENSE.md)

View File

@ -1,4 +1,4 @@
# ZVault repository
# zVault repository
This folder is a zVault remote repository and contains backup data.
@ -181,11 +181,13 @@ The inode entries are encoded as defined in the appendix as `Inode`. The inode
structure contains all meta information on an inode entry, e.g. its file type,
the data size, modification time, permissions and ownership, etc. Also, the
structure contains optional information that is specific to the file type.
For regular files, the inode structure contains the data of that file either
inline (for very small files) or as a reference via a chunk list.
For directories, the inode structure contains a mapping of child inode entries
with their name as key and a chunk list referring to their encoded `Inode`
structure as value.
For symlinks, the inode structure contains the target in the field
`symlink_target`.
@ -251,10 +253,12 @@ The `BundleMode` describes the contents of the chunks of a bundle.
- `Meta` means that the chunks either contain encoded chunk lists or encoded
inode metadata
BundleMode {
Data => 0,
Meta => 1
}
```
BundleMode {
Data => 0,
Meta => 1
}
```
#### `HashMethod`
@ -266,10 +270,12 @@ chunk data. This is not relevant for reading backups.
https://en.wikipedia.org/wiki/MurmurHash for the x64 architecture and with the
hash length set to 128 bits.
HashMethod {
Blake2 => 1,
Murmur3 => 2
}
```
HashMethod {
Blake2 => 1,
Murmur3 => 2
}
```
#### `EncryptionMethod`
@ -278,9 +284,11 @@ decrypt) data.
- `Sodium` means the `crypto_box_seal` method of `libsodium` as specified at
http://www.libsodium.org as a combination of `X25519` and `XSalsa20-Poly1305`.
EncryptionMethod {
Sodium => 0
}
```
EncryptionMethod {
Sodium => 0
}
```
#### `CompressionMethod`
@ -292,12 +300,14 @@ thus also decompress) data.
http://tukaani.org/xz/
- `Lz4` means the LZ4 method as described at http://www.lz4.org
CompressionMethod {
Deflate => 0,
Brotli => 1,
Lzma => 2,
Lz4 => 3
}
```
CompressionMethod {
Deflate => 0,
Brotli => 1,
Lzma => 2,
Lz4 => 3
}
```
#### `FileType`
@ -310,15 +320,16 @@ The `FileType` describes the type of an inode.
- `CharDevice` means a character device
- `NamedPipe` means a named pipe/fifo
FileType {
File => 0,
Directory => 1,
Symlink => 2,
BlockDevice => 3,
CharDevice => 4,
NamedPipe => 5
}
```
FileType {
File => 0,
Directory => 1,
Symlink => 2,
BlockDevice => 3,
CharDevice => 4,
NamedPipe => 5
}
```
### Types
The following types are used to simplify the encoding specifications. They can
@ -329,6 +340,7 @@ used in the encoding specifications instead of their definitions.
#### `Encryption`
The `Encryption` is a combination of an `EncryptionMethod` and a key.
The method specifies how the key was used to encrypt the data.
For the `Sodium` method, the key is the public key used to encrypt the data
with. The secret key needed for decryption, must correspond to that public key.
@ -349,6 +361,7 @@ compression level. The level is only used for compression.
The `BundleHeader` structure contains information on how to decrypt other parts
of a bundle. The structure is encoded using the MessagePack encoding that has
been defined in a previous section.
The `encryption` field contains the information needed to decrypt the rest of
the bundle parts. If the `encryption` option is set, the following parts are
encrypted using the specified method and key, otherwise the parts are not
@ -365,6 +378,7 @@ encrypted. The `info_size` contains the encrypted size of the following
The `BundleInfo` structure contains information on a bundle. The structure is
encoded using the MessagePack encoding that has been defined in a previous
section.
If the `compression` option is set, the chunk data is compressed with the
specified method, otherwise it is uncompressed. The encrypted size of the
following `ChunkList` is stored in the `chunk_list_size` field.
@ -404,20 +418,27 @@ the list in order of appearance in the list.
The `Inode` structure contains information on a backup inode, e.g. a file or
a directory. The structure is encoded using the MessagePack encoding that has
been defined in a previous section.
The `name` field contains the name of this inode which can be concatenated with
the names of all parent inodes (with a platform-dependent separator) to form the
full path of the inode.
The `size` field contains the raw size of the data in
bytes (this is 0 for everything except files).
The `file_type` specifies the type of this inode.
The `mode` field specifies the permissions of the inode as a number which is
normally interpreted as octal.
The `user` and `group` fields specify the ownership of the inode in the form of
user and group id.
The `timestamp` specifies the modification time of the inode in whole seconds
since the UNIX epoch (1970-01-01 00:00 UTC).
The `symlink_target` specifies the target of symlink inodes and is only set for
symlinks.
The `data` specifies the data of a file and is only set for regular files. The
data is specified as a tuple of `nesting` and `bytes`. If `nesting` is `0`,
`bytes` contains the data of the file. This "inline" format is only used for
@ -427,17 +448,20 @@ the data of the file. If `nesting` is `2`, `bytes` is also an encoded
`ChunkList`, but the concatenated data of those chunks form again an encoded
`ChunkList` which in turn contains the chunks with the file data. Thus `nesting`
specifies the number of indirection steps via `ChunkList`s.
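The indirection can be resolved with a simple loop. The following sketch is illustrative only: `ChunkList::read_from` and `fetch_chunks` are hypothetical stand-ins for the repository's list-decoding and chunk-fetching routines, not APIs defined in this document:
```
// Resolve the (nesting, bytes) tuple of a regular file's `data` field.
fn read_file_data(nesting: usize, bytes: Vec<u8>) -> Vec<u8> {
    let mut data = bytes;
    for _ in 0..nesting {
        // Each indirection step decodes a ChunkList and replaces the data with
        // the concatenation of the chunks it references.
        let list = ChunkList::read_from(&data); // hypothetical decoder
        data = fetch_chunks(&list);             // hypothetical chunk fetch
    }
    data // nesting == 0: the bytes are the file contents themselves
}
```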
The `children` field specifies the child inodes of a directory and is only set
for directories. It is a mapping from the name of the child entry to the bytes
of the encoded chunklist of the encoded `Inode` structure of the child. It is
important that the names in the mapping correspond with the names in the
respective child `Inode`s and that the mapping is stored in alphabetic order of
the names.
The `cum_size`, `cum_dirs` and `cum_files` are cumulative values for the inode
as well as the whole subtree (including all children recursively). `cum_size` is
the sum of all inode data sizes plus 1000 bytes for each inode (for encoded
metadata). `cum_dirs` and `cum_files` is the count of directories and
non-directories (symlinks and regular files).
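For example, a directory containing two regular files of 1000 bytes each would have `cum_size` = 2 × 1000 bytes of file data + 3 × 1000 bytes of metadata allowance = 5000 bytes, `cum_dirs` = 1 and `cum_files` = 2.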
The `xattrs` contains a mapping of all extended attributes of the inode. And
`device` contains a tuple with the major and minor device id if the inode is a
block or character device.
@ -471,6 +495,7 @@ This structure is encoded with the following field default values:
The `BackupHeader` structure contains information on how to decrypt the rest of
the backup file. The structure is encoded using the MessagePack encoding that
has been defined in a previous section.
The `encryption` field contains the information needed to decrypt the rest of
the backup file. If the `encryption` option is set, the rest of the backup file
is encrypted using the specified method and key, otherwise the rest is not
@ -485,8 +510,10 @@ encrypted.
The `Backup` structure contains information on one specific backup and
references the root of the backup file tree. The structure is encoded using the
MessagePack encoding that has been defined in a previous section.
The `root` field contains an encoded `ChunkList` that references the root of the
backup file tree.
The fields `total_data_size`, `changed_data_size`, `deduplicated_data_size` and
`encoded_data_size` list the sizes of the backup in various stages in bytes.
- `total_data_size` gives the cumulative sizes of all entries in the backup.
@ -496,16 +523,21 @@ The fields `total_data_size`, `changed_data_size`, `deduplicated_data_size` and
this backup that have not been stored in the repository yet.
- `encoded_data_size` gives the cumulative encoded (and compressed) size of all
new bundles that have been written specifically to store this backup.
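As a worked example, the first silesia run above reports a total and deduplicated size of 202.3 MiB and a compressed size of 64.5 MiB, matching the printed ratio of 64.5 / 202.3 − 1 ≈ −68.1%.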
The fields `bundle_count` and `chunk_count` contain the number of new bundles
and chunks that had to be written to store this backup. `avg_chunk_size` is the
average size of new chunks in this backup.
The field `date` specifies the start of the backup run in seconds since the UNIX
epoch and the field `duration` contains the duration of the backup run in
seconds as a floating point number, including fractions of a second.
The fields `file_count` and `dir_count` contain the total number of
non-directories and directories in this backup.
The `host` and `path` fields contain the host name and the path on that host
where the root of the backup was located.
The field `config` contains the configuration of zVault during the backup run.
Backup {

55
index/Cargo.lock generated
View File

@ -1,55 +0,0 @@
[root]
name = "index"
version = "0.1.0"
dependencies = [
"mmap 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"quick-error 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "libc"
version = "0.1.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "libc"
version = "0.2.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "mmap"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)",
"tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "quick-error"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "rand"
version = "0.3.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.21 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "tempdir"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rand 0.3.15 (registry+https://github.com/rust-lang/crates.io-index)",
]
[metadata]
"checksum libc 0.1.12 (registry+https://github.com/rust-lang/crates.io-index)" = "e32a70cf75e5846d53a673923498228bbec6a8624708a9ea5645f075d6276122"
"checksum libc 0.2.21 (registry+https://github.com/rust-lang/crates.io-index)" = "88ee81885f9f04bff991e306fea7c1c60a5f0f9e409e99f6b40e3311a3363135"
"checksum mmap 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "0bc85448a6006dd2ba26a385a564a8a0f1f2c7e78c70f1a70b2e0f4af286b823"
"checksum quick-error 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "0aad603e8d7fb67da22dbdf1f4b826ce8829e406124109e73cf1b2454b93a71c"
"checksum rand 0.3.15 (registry+https://github.com/rust-lang/crates.io-index)" = "022e0636ec2519ddae48154b028864bdce4eaf7d35226ab8e65c611be97b189d"
"checksum tempdir 0.3.5 (registry+https://github.com/rust-lang/crates.io-index)" = "87974a6f5c1dfb344d733055601650059a3363de2a6104819293baff662132d6"

View File

@ -1,8 +0,0 @@
[package]
name = "index"
version = "0.1.0"
authors = ["Dennis Schwerdel <schwerdel@googlemail.com>"]
[dependencies]
mmap = "0.1"
quick-error = "1.1"

9
lang/Makefile Normal file
View File

@ -0,0 +1,9 @@
MO_FILES = de.mo
default: default.pot ${MO_FILES}
default.pot: excluded.po ../src
find ../src -name '*.rs' | xargs xgettext --debug -L python -n -F -a -E --from-code UTF-8 -x ../lang/excluded.po -o default.pot
%.mo : %.po
msgfmt $< -o $@

2215
lang/de.po Normal file

File diff suppressed because it is too large

2208
lang/default.pot Normal file

File diff suppressed because it is too large

1657
lang/excluded.po Normal file

File diff suppressed because it is too large

View File

@ -14,33 +14,33 @@ quick_error!{
pub enum BundleCacheError {
Read(err: io::Error) {
cause(err)
description("Failed to read bundle cache")
display("Bundle cache error: failed to read bundle cache\n\tcaused by: {}", err)
description(tr!("Failed to read bundle cache"))
display("{}", tr_format!("Bundle cache error: failed to read bundle cache\n\tcaused by: {}", err))
}
Write(err: io::Error) {
cause(err)
description("Failed to write bundle cache")
display("Bundle cache error: failed to write bundle cache\n\tcaused by: {}", err)
description(tr!("Failed to write bundle cache"))
display("{}", tr_format!("Bundle cache error: failed to write bundle cache\n\tcaused by: {}", err))
}
WrongHeader {
description("Wrong header")
display("Bundle cache error: wrong header on bundle cache")
description(tr!("Wrong header"))
display("{}", tr_format!("Bundle cache error: wrong header on bundle cache"))
}
UnsupportedVersion(version: u8) {
description("Wrong version")
display("Bundle cache error: unsupported version: {}", version)
description(tr!("Wrong version"))
display("{}", tr_format!("Bundle cache error: unsupported version: {}", version))
}
Decode(err: msgpack::DecodeError) {
from()
cause(err)
description("Failed to decode bundle cache")
display("Bundle cache error: failed to decode bundle cache\n\tcaused by: {}", err)
description(tr!("Failed to decode bundle cache"))
display("{}", tr_format!("Bundle cache error: failed to decode bundle cache\n\tcaused by: {}", err))
}
Encode(err: msgpack::EncodeError) {
from()
cause(err)
description("Failed to encode bundle cache")
display("Bundle cache error: failed to encode bundle cache\n\tcaused by: {}", err)
description(tr!("Failed to encode bundle cache"))
display("{}", tr_format!("Bundle cache error: failed to encode bundle cache\n\tcaused by: {}", err))
}
}
}
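This and the following diffs apply the same mechanical change: every literal `description`/`display` string becomes a `tr!`/`tr_format!` call so the messages can be translated via the new `lang/` infrastructure. As a rough illustration of the idea (not the project's actual implementation), `tr!` can be thought of as a runtime catalog lookup with a fallback to the English string; `tr_format!` additionally has to substitute arguments at runtime, since `format!` only accepts literal format strings:
```
use std::collections::HashMap;

// Stand-in catalog; the real one would be built from the compiled lang/*.mo files.
fn tr(msg: &str) -> String {
    let catalog: HashMap<&str, &str> = [
        ("Failed to read bundle cache", "Bundle-Cache konnte nicht gelesen werden"),
    ].iter().cloned().collect();
    catalog.get(msg).map_or_else(|| msg.to_string(), |t| t.to_string())
}

fn main() {
    assert_eq!(tr("No such bundle"), "No such bundle"); // no entry: falls back to English
}
```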

View File

@ -14,49 +14,50 @@ quick_error!{
pub enum BundleDbError {
ListBundles(err: io::Error) {
cause(err)
description("Failed to list bundles")
display("Bundle db error: failed to list bundles\n\tcaused by: {}", err)
description(tr!("Failed to list bundles"))
display("{}", tr_format!("Bundle db error: failed to list bundles\n\tcaused by: {}", err))
}
Reader(err: BundleReaderError) {
from()
cause(err)
description("Failed to read bundle")
display("Bundle db error: failed to read bundle\n\tcaused by: {}", err)
description(tr!("Failed to read bundle"))
display("{}", tr_format!("Bundle db error: failed to read bundle\n\tcaused by: {}", err))
}
Writer(err: BundleWriterError) {
from()
cause(err)
description("Failed to write bundle")
display("Bundle db error: failed to write bundle\n\tcaused by: {}", err)
description(tr!("Failed to write bundle"))
display("{}", tr_format!("Bundle db error: failed to write bundle\n\tcaused by: {}", err))
}
Cache(err: BundleCacheError) {
from()
cause(err)
description("Failed to read/write bundle cache")
display("Bundle db error: failed to read/write bundle cache\n\tcaused by: {}", err)
description(tr!("Failed to read/write bundle cache"))
display("{}", tr_format!("Bundle db error: failed to read/write bundle cache\n\tcaused by: {}", err))
}
UploadFailed {
description("Uploading a bundle failed")
description(tr!("Uploading a bundle failed"))
}
Io(err: io::Error, path: PathBuf) {
cause(err)
context(path: &'a Path, err: io::Error) -> (err, path.to_path_buf())
description("Io error")
display("Bundle db error: io error on {:?}\n\tcaused by: {}", path, err)
description(tr!("Io error"))
display("{}", tr_format!("Bundle db error: io error on {:?}\n\tcaused by: {}", path, err))
}
NoSuchBundle(bundle: BundleId) {
description("No such bundle")
display("Bundle db error: no such bundle: {:?}", bundle)
description(tr!("No such bundle"))
display("{}", tr_format!("Bundle db error: no such bundle: {:?}", bundle))
}
Remove(err: io::Error, bundle: BundleId) {
cause(err)
description("Failed to remove bundle")
display("Bundle db error: failed to remove bundle {}\n\tcaused by: {}", bundle, err)
description(tr!("Failed to remove bundle"))
display("{}", tr_format!("Bundle db error: failed to remove bundle {}\n\tcaused by: {}", bundle, err))
}
}
}
#[allow(needless_pass_by_value)]
fn load_bundles(
path: &Path,
base: &Path,
@ -98,8 +99,8 @@ fn load_bundles(
}
};
let bundle = StoredBundle {
info: info,
path: path
info,
path
};
let id = bundle.info.id.clone();
if !bundles.contains_key(&id) {
@ -128,8 +129,8 @@ pub struct BundleDb {
impl BundleDb {
fn new(layout: RepositoryLayout, crypto: Arc<Mutex<Crypto>>) -> Self {
BundleDb {
layout: layout,
crypto: crypto,
layout,
crypto,
uploader: None,
local_bundles: HashMap::new(),
remote_bundles: HashMap::new(),
@ -139,20 +140,21 @@ impl BundleDb {
fn load_bundle_list(
&mut self,
online: bool
) -> Result<(Vec<StoredBundle>, Vec<StoredBundle>), BundleDbError> {
if let Ok(list) = StoredBundle::read_list_from(&self.layout.local_bundle_cache_path()) {
for bundle in list {
self.local_bundles.insert(bundle.id(), bundle);
}
} else {
warn!("Failed to read local bundle cache, rebuilding cache");
tr_warn!("Failed to read local bundle cache, rebuilding cache");
}
if let Ok(list) = StoredBundle::read_list_from(&self.layout.remote_bundle_cache_path()) {
for bundle in list {
self.remote_bundles.insert(bundle.id(), bundle);
}
} else {
warn!("Failed to read remote bundle cache, rebuilding cache");
tr_warn!("Failed to read remote bundle cache, rebuilding cache");
}
let base_path = self.layout.base_path();
let (new, gone) = try!(load_bundles(
@ -168,6 +170,9 @@ impl BundleDb {
&self.layout.local_bundle_cache_path()
));
}
if !online {
return Ok((vec![], vec![]))
}
let (new, gone) = try!(load_bundles(
&self.layout.remote_bundles_path(),
base_path,
@ -195,10 +200,11 @@ impl BundleDb {
&self.layout.local_bundle_cache_path()
));
let bundles: Vec<_> = self.remote_bundles.values().cloned().collect();
Ok(try!(StoredBundle::save_list_to(
try!(StoredBundle::save_list_to(
&bundles,
&self.layout.remote_bundle_cache_path()
)))
));
Ok(())
}
fn update_cache(&mut self) -> Result<(), BundleDbError> {
@ -217,7 +223,7 @@ impl BundleDb {
for id in meta_bundles {
if !self.local_bundles.contains_key(&id) {
let bundle = self.remote_bundles[&id].clone();
debug!("Copying new meta bundle to local cache: {}", bundle.info.id);
tr_debug!("Copying new meta bundle to local cache: {}", bundle.info.id);
try!(self.copy_remote_bundle_to_cache(&bundle));
}
}
@ -235,16 +241,17 @@ impl BundleDb {
pub fn open(
layout: RepositoryLayout,
crypto: Arc<Mutex<Crypto>>,
online: bool
) -> Result<(Self, Vec<BundleInfo>, Vec<BundleInfo>), BundleDbError> {
let mut self_ = Self::new(layout, crypto);
let (new, gone) = try!(self_.load_bundle_list());
let (new, gone) = try!(self_.load_bundle_list(online));
try!(self_.update_cache());
let new = new.into_iter().map(|s| s.info).collect();
let gone = gone.into_iter().map(|s| s.info).collect();
Ok((self_, new, gone))
}
pub fn create(layout: RepositoryLayout) -> Result<(), BundleDbError> {
pub fn create(layout: &RepositoryLayout) -> Result<(), BundleDbError> {
try!(fs::create_dir_all(layout.remote_bundles_path()).context(
&layout.remote_bundles_path() as
&Path
@ -405,7 +412,7 @@ impl BundleDb {
pub fn check(&mut self, full: bool, repair: bool) -> Result<bool, BundleDbError> {
let mut to_repair = vec![];
for (id, stored) in ProgressIter::new(
"checking bundles",
tr!("checking bundles"),
self.remote_bundles.len(),
self.remote_bundles.iter()
)
@ -431,8 +438,8 @@ impl BundleDb {
}
}
if !to_repair.is_empty() {
for id in ProgressIter::new("repairing bundles", to_repair.len(), to_repair.iter()) {
try!(self.repair_bundle(id.clone()));
for id in ProgressIter::new(tr!("repairing bundles"), to_repair.len(), to_repair.iter()) {
try!(self.repair_bundle(id));
}
try!(self.flush());
}
@ -453,12 +460,12 @@ impl BundleDb {
Ok(())
}
fn repair_bundle(&mut self, id: BundleId) -> Result<(), BundleDbError> {
let stored = self.remote_bundles[&id].clone();
fn repair_bundle(&mut self, id: &BundleId) -> Result<(), BundleDbError> {
let stored = self.remote_bundles[id].clone();
let mut bundle = match self.get_bundle(&stored) {
Ok(bundle) => bundle,
Err(err) => {
warn!(
tr_warn!(
"Problem detected: failed to read bundle header: {}\n\tcaused by: {}",
id,
err
@ -469,7 +476,7 @@ impl BundleDb {
let chunks = match bundle.get_chunk_list() {
Ok(chunks) => chunks.clone(),
Err(err) => {
warn!(
tr_warn!(
"Problem detected: failed to read bundle chunks: {}\n\tcaused by: {}",
id,
err
@ -480,7 +487,7 @@ impl BundleDb {
let data = match bundle.load_contents() {
Ok(data) => data,
Err(err) => {
warn!(
tr_warn!(
"Problem detected: failed to read bundle data: {}\n\tcaused by: {}",
id,
err
@ -488,8 +495,8 @@ impl BundleDb {
return self.evacuate_broken_bundle(stored);
}
};
warn!("Problem detected: bundle data was truncated: {}", id);
info!("Copying readable data into new bundle");
tr_warn!("Problem detected: bundle data was truncated: {}", id);
tr_info!("Copying readable data into new bundle");
let info = stored.info.clone();
let mut new_bundle = try!(self.create_bundle(
info.mode,
@ -507,7 +514,7 @@ impl BundleDb {
pos += len as usize;
}
let bundle = try!(self.add_bundle(new_bundle));
info!("New bundle id is {}", bundle.id);
tr_info!("New bundle id is {}", bundle.id);
self.evacuate_broken_bundle(stored)
}
@ -515,4 +522,30 @@ impl BundleDb {
pub fn len(&self) -> usize {
self.remote_bundles.len()
}
pub fn statistics(&self) -> BundleStatistics {
let bundles = self.list_bundles();
let bundles_meta: Vec<_> = bundles.iter().filter(|b| b.mode == BundleMode::Meta).collect();
let bundles_data: Vec<_> = bundles.iter().filter(|b| b.mode == BundleMode::Data).collect();
let mut hash_methods = HashMap::new();
let mut compressions = HashMap::new();
let mut encryptions = HashMap::new();
for bundle in &bundles {
*hash_methods.entry(bundle.hash_method).or_insert(0) += 1;
*compressions.entry(bundle.compression.clone()).or_insert(0) += 1;
*encryptions.entry(bundle.encryption.clone()).or_insert(0) += 1;
}
BundleStatistics {
hash_methods, compressions, encryptions,
raw_size: ValueStats::from_iter(|| bundles.iter().map(|b| b.raw_size as f32)),
encoded_size: ValueStats::from_iter(|| bundles.iter().map(|b| b.encoded_size as f32)),
chunk_count: ValueStats::from_iter(|| bundles.iter().map(|b| b.chunk_count as f32)),
raw_size_meta: ValueStats::from_iter(|| bundles_meta.iter().map(|b| b.raw_size as f32)),
encoded_size_meta: ValueStats::from_iter(|| bundles_meta.iter().map(|b| b.encoded_size as f32)),
chunk_count_meta: ValueStats::from_iter(|| bundles_meta.iter().map(|b| b.chunk_count as f32)),
raw_size_data: ValueStats::from_iter(|| bundles_data.iter().map(|b| b.raw_size as f32)),
encoded_size_data: ValueStats::from_iter(|| bundles_data.iter().map(|b| b.encoded_size as f32)),
chunk_count_data: ValueStats::from_iter(|| bundles_data.iter().map(|b| b.chunk_count as f32))
}
}
}

View File

@ -13,6 +13,7 @@ pub use self::uploader::BundleUploader;
use prelude::*;
use std::fmt;
use std::collections::HashMap;
use serde;
use rand;
@ -133,3 +134,20 @@ impl Default for BundleInfo {
}
}
}
#[derive(Debug)]
pub struct BundleStatistics {
pub raw_size: ValueStats,
pub encoded_size: ValueStats,
pub chunk_count: ValueStats,
pub raw_size_meta: ValueStats,
pub encoded_size_meta: ValueStats,
pub chunk_count_meta: ValueStats,
pub raw_size_data: ValueStats,
pub encoded_size_data: ValueStats,
pub chunk_count_data: ValueStats,
pub hash_methods: HashMap<HashMethod, usize>,
pub compressions: HashMap<Option<Compression>, usize>,
pub encryptions: HashMap<Option<Encryption>, usize>
}

View File

@ -15,42 +15,42 @@ quick_error!{
Read(err: io::Error, path: PathBuf) {
cause(err)
context(path: &'a Path, err: io::Error) -> (err, path.to_path_buf())
description("Failed to read data from file")
display("Bundle reader error: failed to read data from file {:?}\n\tcaused by: {}", path, err)
description(tr!("Failed to read data from file"))
display("{}", tr_format!("Bundle reader error: failed to read data from file {:?}\n\tcaused by: {}", path, err))
}
WrongHeader(path: PathBuf) {
description("Wrong header")
display("Bundle reader error: wrong header on bundle {:?}", path)
description(tr!("Wrong header"))
display("{}", tr_format!("Bundle reader error: wrong header on bundle {:?}", path))
}
UnsupportedVersion(path: PathBuf, version: u8) {
description("Wrong version")
display("Bundle reader error: unsupported version on bundle {:?}: {}", path, version)
description(tr!("Wrong version"))
display("{}", tr_format!("Bundle reader error: unsupported version on bundle {:?}: {}", path, version))
}
NoSuchChunk(bundle: BundleId, id: usize) {
description("Bundle has no such chunk")
display("Bundle reader error: bundle {:?} has no chunk with id {}", bundle, id)
description(tr!("Bundle has no such chunk"))
display("{}", tr_format!("Bundle reader error: bundle {:?} has no chunk with id {}", bundle, id))
}
Decode(err: msgpack::DecodeError, path: PathBuf) {
cause(err)
context(path: &'a Path, err: msgpack::DecodeError) -> (err, path.to_path_buf())
description("Failed to decode bundle header")
display("Bundle reader error: failed to decode bundle header of {:?}\n\tcaused by: {}", path, err)
description(tr!("Failed to decode bundle header"))
display("{}", tr_format!("Bundle reader error: failed to decode bundle header of {:?}\n\tcaused by: {}", path, err))
}
Decompression(err: CompressionError, path: PathBuf) {
cause(err)
context(path: &'a Path, err: CompressionError) -> (err, path.to_path_buf())
description("Decompression failed")
display("Bundle reader error: decompression failed on bundle {:?}\n\tcaused by: {}", path, err)
description(tr!("Decompression failed"))
display("{}", tr_format!("Bundle reader error: decompression failed on bundle {:?}\n\tcaused by: {}", path, err))
}
Decryption(err: EncryptionError, path: PathBuf) {
cause(err)
context(path: &'a Path, err: EncryptionError) -> (err, path.to_path_buf())
description("Decryption failed")
display("Bundle reader error: decryption failed on bundle {:?}\n\tcaused by: {}", path, err)
description(tr!("Decryption failed"))
display("{}", tr_format!("Bundle reader error: decryption failed on bundle {:?}\n\tcaused by: {}", path, err))
}
Integrity(bundle: BundleId, reason: &'static str) {
description("Bundle has an integrity error")
display("Bundle reader error: bundle {:?} has an integrity error: {}", bundle, reason)
description(tr!("Bundle has an integrity error"))
display("{}", tr_format!("Bundle reader error: bundle {:?} has an integrity error: {}", bundle, reason))
}
}
}
@ -75,12 +75,12 @@ impl BundleReader {
info: BundleInfo,
) -> Self {
BundleReader {
info: info,
info,
chunks: None,
version: version,
path: path,
crypto: crypto,
content_start: content_start,
version,
path,
crypto,
content_start,
chunk_positions: None
}
}
@ -90,6 +90,7 @@ impl BundleReader {
self.info.id.clone()
}
#[allow(needless_pass_by_value)]
fn load_header<P: AsRef<Path>>(
path: P,
crypto: Arc<Mutex<Crypto>>,
@ -150,7 +151,7 @@ impl BundleReader {
}
fn load_chunklist(&mut self) -> Result<(), BundleReaderError> {
debug!(
tr_debug!(
"Load bundle chunklist {} ({:?})",
self.info.id,
self.info.mode
@ -196,7 +197,7 @@ impl BundleReader {
}
fn load_encoded_contents(&self) -> Result<Vec<u8>, BundleReaderError> {
debug!("Load bundle data {} ({:?})", self.info.id, self.info.mode);
tr_debug!("Load bundle data {} ({:?})", self.info.id, self.info.mode);
let mut file = BufReader::new(try!(File::open(&self.path).context(&self.path as &Path)));
try!(
file.seek(SeekFrom::Start(self.content_start as u64))
@ -255,7 +256,7 @@ impl BundleReader {
if self.info.chunk_count != self.chunks.as_ref().unwrap().len() {
return Err(BundleReaderError::Integrity(
self.id(),
"Chunk list size does not match chunk count"
tr!("Chunk list size does not match chunk count")
));
}
if self.chunks
@ -267,7 +268,7 @@ impl BundleReader {
{
return Err(BundleReaderError::Integrity(
self.id(),
"Individual chunk sizes do not add up to total size"
tr!("Individual chunk sizes do not add up to total size")
));
}
if !full {
@ -275,7 +276,7 @@ impl BundleReader {
if size as usize != self.info.encoded_size + self.content_start {
return Err(BundleReaderError::Integrity(
self.id(),
"File size does not match size in header, truncated file"
tr!("File size does not match size in header, truncated file")
));
}
return Ok(());
@ -284,32 +285,41 @@ impl BundleReader {
if self.info.encoded_size != encoded_contents.len() {
return Err(BundleReaderError::Integrity(
self.id(),
"Encoded data size does not match size in header, truncated bundle"
tr!("Encoded data size does not match size in header, truncated bundle")
));
}
let contents = try!(self.decode_contents(encoded_contents));
if self.info.raw_size != contents.len() {
return Err(BundleReaderError::Integrity(
self.id(),
"Raw data size does not match size in header, truncated bundle"
tr!("Raw data size does not match size in header, truncated bundle")
));
}
//TODO: verify checksum
let mut pos = 0;
for chunk in self.chunks.as_ref().unwrap().as_ref() {
let data = &contents[pos..pos+chunk.1 as usize];
if self.info.hash_method.hash(data) != chunk.0 {
return Err(BundleReaderError::Integrity(
self.id(),
tr!("Stored hash does not match hash in header, modified data")
));
}
pos += chunk.1 as usize;
}
Ok(())
}
}
impl Debug for BundleReader {
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(
fmt,
"Bundle(\n\tid: {}\n\tpath: {:?}\n\tchunks: {}\n\tsize: {}, encoded: {}\n\tcompression: {:?}\n)",
write!(fmt, "{}",
tr_format!("Bundle(\n\tid: {}\n\tpath: {:?}\n\tchunks: {}\n\tsize: {}, encoded: {}\n\tcompression: {:?}\n)",
self.info.id.to_string(),
self.path,
self.info.chunk_count,
self.info.raw_size,
self.info.encoded_size,
self.info.compression
)
))
}
}

View File

@ -20,7 +20,7 @@ pub struct BundleUploader {
impl BundleUploader {
pub fn new(capacity: usize) -> Arc<Self> {
let self_ = Arc::new(BundleUploader {
capacity: capacity,
capacity,
error_present: AtomicBool::new(false),
error: Mutex::new(None),
waiting: AtomicUsize::new(0),
@ -51,10 +51,10 @@ impl BundleUploader {
pub fn queue(&self, local_path: PathBuf, remote_path: PathBuf) -> Result<(), BundleDbError> {
while self.waiting.load(Ordering::SeqCst) >= self.capacity {
debug!("Upload queue is full, waiting for slots");
tr_debug!("Upload queue is full, waiting for slots");
let _ = self.wait.0.wait(self.wait.1.lock().unwrap()).unwrap();
}
trace!("Adding to upload queue: {:?}", local_path);
tr_trace!("Adding to upload queue: {:?}", local_path);
if !self.error_present.load(Ordering::SeqCst) {
self.waiting.fetch_add(1, Ordering::SeqCst);
self.queue.push(Some((local_path, remote_path)));
@ -75,21 +75,21 @@ impl BundleUploader {
fn worker_thread_inner(&self) -> Result<(), BundleDbError> {
while let Some((src_path, dst_path)) = self.queue.pop() {
trace!("Uploading {:?} to {:?}", src_path, dst_path);
tr_trace!("Uploading {:?} to {:?}", src_path, dst_path);
self.waiting.fetch_sub(1, Ordering::SeqCst);
self.wait.0.notify_all();
let folder = dst_path.parent().unwrap();
try!(fs::create_dir_all(&folder).context(folder as &Path));
try!(fs::copy(&src_path, &dst_path).context(&dst_path as &Path));
try!(fs::remove_file(&src_path).context(&src_path as &Path));
debug!("Uploaded {:?} to {:?}", src_path, dst_path);
tr_debug!("Uploaded {:?} to {:?}", src_path, dst_path);
}
Ok(())
}
fn worker_thread(&self) {
if let Err(err) = self.worker_thread_inner() {
debug!("Upload thread failed with error: {}", err);
tr_debug!("Upload thread failed with error: {}", err);
*self.error.lock().unwrap() = Some(err);
self.error_present.store(true, Ordering::SeqCst);
}

View File

@ -14,31 +14,31 @@ quick_error!{
pub enum BundleWriterError {
CompressionSetup(err: CompressionError) {
cause(err)
description("Failed to setup compression")
display("Bundle writer error: failed to setup compression\n\tcaused by: {}", err)
description(tr!("Failed to setup compression"))
display("{}", tr_format!("Bundle writer error: failed to setup compression\n\tcaused by: {}", err))
}
Compression(err: CompressionError) {
cause(err)
description("Failed to compress data")
display("Bundle writer error: failed to compress data\n\tcaused by: {}", err)
description(tr!("Failed to compress data"))
display("{}", tr_format!("Bundle writer error: failed to compress data\n\tcaused by: {}", err))
}
Encryption(err: EncryptionError) {
from()
cause(err)
description("Encryption failed")
display("Bundle writer error: failed to encrypt data\n\tcaused by: {}", err)
description(tr!("Encryption failed"))
display("{}", tr_format!("Bundle writer error: failed to encrypt data\n\tcaused by: {}", err))
}
Encode(err: msgpack::EncodeError, path: PathBuf) {
cause(err)
context(path: &'a Path, err: msgpack::EncodeError) -> (err, path.to_path_buf())
description("Failed to encode bundle header to file")
display("Bundle writer error: failed to encode bundle header to file {:?}\n\tcaused by: {}", path, err)
description(tr!("Failed to encode bundle header to file"))
display("{}", tr_format!("Bundle writer error: failed to encode bundle header to file {:?}\n\tcaused by: {}", path, err))
}
Write(err: io::Error, path: PathBuf) {
cause(err)
context(path: &'a Path, err: io::Error) -> (err, path.to_path_buf())
description("Failed to write data to file")
display("Bundle writer error: failed to write data to file {:?}\n\tcaused by: {}", path, err)
description(tr!("Failed to write data to file"))
display("{}", tr_format!("Bundle writer error: failed to write data to file {:?}\n\tcaused by: {}", path, err))
}
}
}
@ -72,13 +72,13 @@ impl BundleWriter {
None => None,
};
Ok(BundleWriter {
mode: mode,
hash_method: hash_method,
mode,
hash_method,
data: vec![],
compression: compression,
compression_stream: compression_stream,
encryption: encryption,
crypto: crypto,
compression,
compression_stream,
encryption,
crypto,
raw_size: 0,
chunk_count: 0,
chunks: ChunkList::new()
@ -127,7 +127,7 @@ impl BundleWriter {
chunk_count: self.chunk_count,
id: id.clone(),
raw_size: self.raw_size,
encoded_size: encoded_size,
encoded_size,
chunk_list_size: chunk_data.len(),
timestamp: Local::now().timestamp()
};
@ -149,8 +149,8 @@ impl BundleWriter {
.unwrap()
.to_path_buf();
Ok(StoredBundle {
path: path,
info: info
path,
info
})
}

View File

@ -25,14 +25,14 @@ impl ChunkerType {
"rabin" => Ok(ChunkerType::Rabin((avg_size, seed as u32))),
"fastcdc" => Ok(ChunkerType::FastCdc((avg_size, seed))),
"fixed" => Ok(ChunkerType::Fixed(avg_size)),
_ => Err("Unsupported chunker type"),
_ => Err(tr!("Unsupported chunker type")),
}
}
pub fn from_string(name: &str) -> Result<Self, &'static str> {
let (name, size) = if let Some(pos) = name.find('/') {
let size = try!(usize::from_str(&name[pos + 1..]).map_err(
|_| "Chunk size must be a number"
|_| tr!("Chunk size must be a number")
));
let name = &name[..pos];
(name, size)
@ -79,7 +79,7 @@ impl ChunkerType {
match *self {
ChunkerType::Ae(_size) |
ChunkerType::Fixed(_size) => 0,
ChunkerType::Rabin((_size, seed)) => seed as u64,
ChunkerType::Rabin((_size, seed)) => u64::from(seed),
ChunkerType::FastCdc((_size, seed)) => seed,
}
}
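
from_string accepts name/size specs such as the default "fastcdc/16". The integer after the slash must be plain; how it is scaled to bytes (16 reading naturally as 16 KiB) happens in the constructor outside this hunk, so the unit here is an assumption:

// Illustrative parses, assuming from_string delegates to the match above
// and an untranslated locale for the error strings:
//   "fastcdc/16" -> name "fastcdc", size 16 -> Ok(ChunkerType::FastCdc(..))
//   "fastcdc/xl" -> Err("Chunk size must be a number")
//   "quux/16"    -> Err("Unsupported chunker type")
let chunker = ChunkerType::from_string("fastcdc/16").unwrap();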

View File

@ -7,7 +7,7 @@ use std::ptr;
pub struct AeChunker {
buffer: [u8; 4096],
buffer: [u8; 0x1000],
buffered: usize,
window_size: usize
}
@ -18,16 +18,16 @@ impl AeChunker {
//let window_size = (avg_size as f64 / (consts::E - 1.0)) as usize;
let window_size = avg_size - 256;
AeChunker{
buffer: [0; 4096],
buffer: [0; 0x1000],
buffered: 0,
window_size: window_size,
window_size,
}
}
}
impl Chunker for AeChunker {
#[allow(unknown_lints,explicit_counter_loop)]
fn chunk(&mut self, r: &mut Read, mut w: &mut Write) -> Result<ChunkerStatus, ChunkerError> {
fn chunk(&mut self, r: &mut Read, w: &mut Write) -> Result<ChunkerStatus, ChunkerError> {
let mut max;
let mut pos = 0;
let mut max_pos = 0;

View File

@ -1,8 +1,3 @@
#![feature(test)]
extern crate test;
extern crate chunking;
use chunking::*;
use std::io::{self, Write, Cursor};
@ -26,10 +21,22 @@ fn random_data(seed: u64, size: usize) -> Vec<u8> {
}
struct DevNull;
struct CutPositions(Vec<u64>, u64);
impl Write for DevNull {
impl CutPositions {
pub fn new() -> Self {
CutPositions(vec![], 0)
}
pub fn positions(&self) -> &[u64] {
&self.0
}
}
impl Write for CutPositions {
fn write(&mut self, data: &[u8]) -> Result<usize, io::Error> {
self.1 += data.len() as u64;
self.0.push(self.1);
Ok(data.len())
}
@ -53,7 +60,9 @@ fn test_fixed_8192(b: &mut Bencher) {
b.iter(|| {
let mut chunker = FixedChunker::new(8*1024);
let mut cursor = Cursor::new(&data);
while chunker.chunk(&mut cursor, &mut DevNull).unwrap() == ChunkerStatus::Continue {}
let mut sink = CutPositions::new();
while chunker.chunk(&mut cursor, &mut sink).unwrap() == ChunkerStatus::Continue {};
test::black_box(sink.positions().len())
})
}
@ -72,7 +81,9 @@ fn test_ae_8192(b: &mut Bencher) {
b.iter(|| {
let mut chunker = AeChunker::new(8*1024);
let mut cursor = Cursor::new(&data);
while chunker.chunk(&mut cursor, &mut DevNull).unwrap() == ChunkerStatus::Continue {}
let mut sink = CutPositions::new();
while chunker.chunk(&mut cursor, &mut sink).unwrap() == ChunkerStatus::Continue {};
test::black_box(sink.positions().len())
})
}
@ -91,7 +102,9 @@ fn test_rabin_8192(b: &mut Bencher) {
b.iter(|| {
let mut chunker = RabinChunker::new(8*1024, 0);
let mut cursor = Cursor::new(&data);
while chunker.chunk(&mut cursor, &mut DevNull).unwrap() == ChunkerStatus::Continue {}
let mut sink = CutPositions::new();
while chunker.chunk(&mut cursor, &mut sink).unwrap() == ChunkerStatus::Continue {};
test::black_box(sink.positions().len())
})
}
@ -110,6 +123,8 @@ fn test_fastcdc_8192(b: &mut Bencher) {
b.iter(|| {
let mut chunker = FastCdcChunker::new(8*1024, 0);
let mut cursor = Cursor::new(&data);
while chunker.chunk(&mut cursor, &mut DevNull).unwrap() == ChunkerStatus::Continue {}
let mut sink = CutPositions::new();
while chunker.chunk(&mut cursor, &mut sink).unwrap() == ChunkerStatus::Continue {};
test::black_box(sink.positions().len())
})
}
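
The benches now collect cut positions and feed the count through test::black_box instead of discarding output into DevNull. Without an observable result the optimizer may prove the chunking loop has no effect and delete it, so the benchmark would time nothing; black_box is an opaque identity that keeps the work alive. The guard in isolation (nightly-only, matching the bench setup):

#![feature(test)]
extern crate test;

fn main() {
    let positions: Vec<u64> = (0..1000).collect(); // stand-in for sink.positions()
    // Passing the result through black_box prevents the compiler from
    // eliminating the computation that produced it.
    assert_eq!(test::black_box(positions.len()), 1000);
}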

src/chunking/fastcdc.rs Normal file
View File

@ -0,0 +1,130 @@
use super::*;
use std::ptr;
use std::cmp;
// FastCDC
// Paper: "FastCDC: a Fast and Efficient Content-Defined Chunking Approach for Data Deduplication"
// Paper-URL: https://www.usenix.org/system/files/conference/atc16/atc16-paper-xia.pdf
// Presentation: https://www.usenix.org/sites/default/files/conference/protected-files/atc16_slides_xia.pdf
// Creating 256 pseudo-random values (based on Knuth's MMIX)
fn create_gear(seed: u64) -> [u64; 256] {
let mut table = [0u64; 256];
let a = 6_364_136_223_846_793_005;
let c = 1_442_695_040_888_963_407;
let mut v = seed;
for t in &mut table.iter_mut() {
v = v.wrapping_mul(a).wrapping_add(c);
*t = v;
}
table
}
fn get_masks(avg_size: usize, nc_level: usize, seed: u64) -> (u64, u64) {
let bits = (avg_size.next_power_of_two() - 1).count_ones();
if bits == 13 {
// From the paper
return (0x0003_5907_0353_0000, 0x0000_d900_0353_0000);
}
let mut mask = 0u64;
let mut v = seed;
let a = 6_364_136_223_846_793_005;
let c = 1_442_695_040_888_963_407;
while mask.count_ones() < bits - nc_level as u32 {
v = v.wrapping_mul(a).wrapping_add(c);
mask = (mask | 1).rotate_left(v as u32 & 0x3f);
}
let mask_long = mask;
while mask.count_ones() < bits + nc_level as u32 {
v = v.wrapping_mul(a).wrapping_add(c);
mask = (mask | 1).rotate_left(v as u32 & 0x3f);
}
let mask_short = mask;
(mask_short, mask_long)
}
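
A cut point fires when `hash & mask == 0`, so the expected spacing between cuts is 2^ones(mask). For the 8 KiB case (bits == 13) the paper's hardcoded constants carry 13 + 2 and 13 - 2 one-bits: the short-region mask makes cuts rarer before avg_size is reached, the long-region mask makes them likelier after it. A quick check of those counts:

fn main() {
    let (mask_short, mask_long) = (0x0003_5907_0353_0000u64, 0x0000_d900_0353_0000u64);
    assert_eq!(mask_short.count_ones(), 15); // 13 + nc_level(2), spacing ~2^15
    assert_eq!(mask_long.count_ones(), 11);  // 13 - nc_level(2), spacing ~2^11
}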
pub struct FastCdcChunker {
buffer: [u8; 0x1000],
buffered: usize,
gear: [u64; 256],
min_size: usize,
max_size: usize,
avg_size: usize,
mask_long: u64,
mask_short: u64,
}
impl FastCdcChunker {
pub fn new(avg_size: usize, seed: u64) -> Self {
let (mask_short, mask_long) = get_masks(avg_size, 2, seed);
FastCdcChunker {
buffer: [0; 0x1000],
buffered: 0,
gear: create_gear(seed),
min_size: avg_size/4,
max_size: avg_size*8,
avg_size,
mask_long,
mask_short,
}
}
}
impl FastCdcChunker {
fn write_output(&mut self, w: &mut Write, pos: usize, max: usize) -> Result<ChunkerStatus, ChunkerError> {
debug_assert!(max <= self.buffer.len());
debug_assert!(pos <= self.buffer.len());
try!(w.write_all(&self.buffer[..pos]).map_err(ChunkerError::Write));
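// ptr::copy has memmove semantics, so shifting the unconsumed tail to
// the front of the (overlapping) buffer is well-defined here.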
unsafe { ptr::copy(self.buffer[pos..].as_ptr(), self.buffer.as_mut_ptr(), max-pos) };
self.buffered = max-pos;
Ok(ChunkerStatus::Continue)
}
}
impl Chunker for FastCdcChunker {
#[allow(unknown_lints,explicit_counter_loop,needless_range_loop)]
fn chunk(&mut self, r: &mut Read, w: &mut Write) -> Result<ChunkerStatus, ChunkerError> {
let mut max;
let mut hash = 0u64;
let mut pos = 0;
loop {

// Fill the buffer; there might still be some bytes in it from the last chunk
max = try!(r.read(&mut self.buffer[self.buffered..]).map_err(ChunkerError::Read)) + self.buffered;
// If nothing to do, finish
if max == 0 {
return Ok(ChunkerStatus::Finished)
}
let min_size_p = cmp::min(max, cmp::max(self.min_size as isize - pos as isize, 0) as usize);
let avg_size_p = cmp::min(max, cmp::max(self.avg_size as isize - pos as isize, 0) as usize);
let max_size_p = cmp::min(max, cmp::max(self.max_size as isize - pos as isize, 0) as usize);
// Skipping the first min_size bytes. This is ok since the same data still results in the same hash.
if self.avg_size > pos {
for i in min_size_p..avg_size_p {
hash = (hash << 1).wrapping_add(self.gear[self.buffer[i] as usize]);
if hash & self.mask_short == 0 {
return self.write_output(w, i + 1, max);
}
}
}
if self.max_size > pos {
for i in avg_size_p..max_size_p {
hash = (hash << 1).wrapping_add(self.gear[self.buffer[i] as usize]);
if hash & self.mask_long == 0 {
return self.write_output(w, i+1, max);
}
}
}
if max + pos >= self.max_size {
return self.write_output(w, max_size_p, max);
}
pos += max;
try!(w.write_all(&self.buffer[..max]).map_err(ChunkerError::Write));
self.buffered = 0;
}
}
}
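
Driving the chunker follows the same pattern as the tests further down: each call to chunk() writes one chunk into the sink and returns Continue until the input runs out. A sketch using the types defined in this crate:

use std::io::Cursor;

fn split(data: &[u8]) -> Vec<Vec<u8>> {
    let mut chunker = FastCdcChunker::new(8 * 1024, 0); // 8 KiB average, seed 0
    let mut cursor = Cursor::new(data);
    let mut chunks = vec![];
    loop {
        let mut chunk = vec![];
        let status = chunker.chunk(&mut cursor, &mut chunk).unwrap();
        if !chunk.is_empty() {
            chunks.push(chunk); // one chunk, roughly min_size..=max_size bytes
        }
        if status == ChunkerStatus::Finished {
            return chunks;
        }
    }
}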

View File

@ -4,14 +4,14 @@ use std::cmp::min;
pub struct FixedChunker {
buffer: [u8; 4096],
buffer: [u8; 0x1000],
size: usize
}
impl FixedChunker {
pub fn new(avg_size: usize) -> FixedChunker {
FixedChunker{
buffer: [0; 4096],
buffer: [0; 0x1000],
size: avg_size,
}
}
@ -19,7 +19,7 @@ impl FixedChunker {
impl Chunker for FixedChunker {
#[allow(unknown_lints,explicit_counter_loop)]
fn chunk(&mut self, r: &mut Read, mut w: &mut Write) -> Result<ChunkerStatus, ChunkerError> {
fn chunk(&mut self, r: &mut Read, w: &mut Write) -> Result<ChunkerStatus, ChunkerError> {
let mut todo = self.size;
loop {
// Fill the buffer; there might still be some bytes in it from the last chunk

View File

@ -1,11 +1,11 @@
#[macro_use] extern crate quick_error;
use std::io::{self, Write, Read};
mod fixed;
mod ae;
mod rabin;
mod fastcdc;
#[cfg(test)] mod test;
#[cfg(feature = "bench")] mod benches;
pub use self::fixed::FixedChunker;
pub use self::ae::AeChunker;
@ -25,18 +25,18 @@ quick_error!{
pub enum ChunkerError {
Read(err: io::Error) {
cause(err)
description("Failed to read input")
display("Chunker error: failed to read input\n\tcaused by: {}", err)
description(tr!("Failed to read input"))
display("{}", tr_format!("Chunker error: failed to read input\n\tcaused by: {}", err))
}
Write(err: io::Error) {
cause(err)
description("Failed to write to output")
display("Chunker error: failed to write to output\n\tcaused by: {}", err)
description(tr!("Failed to write to output"))
display("{}", tr_format!("Chunker error: failed to write to output\n\tcaused by: {}", err))
}
Custom(reason: &'static str) {
from()
description("Custom error")
display("Chunker error: {}", reason)
description(tr!("Custom error"))
display("{}", tr_format!("Chunker error: {}", reason))
}
}
}

View File

@ -34,7 +34,7 @@ fn create_table(alpha: u32, window_size: usize) -> [u32; 256] {
pub struct RabinChunker {
buffer: [u8; 4096],
buffer: [u8; 0x1000],
buffered: usize,
seed: u32,
alpha: u32,
@ -50,24 +50,24 @@ impl RabinChunker {
pub fn new(avg_size: usize, seed: u32) -> Self {
let chunk_mask = (avg_size as u32).next_power_of_two() - 1;
let window_size = avg_size/4-1;
let alpha = 1664525;//153191;
let alpha = 1_664_525;//153191;
RabinChunker {
buffer: [0; 4096],
buffer: [0; 0x1000],
buffered: 0,
table: create_table(alpha, window_size),
alpha: alpha,
seed: seed,
alpha,
seed,
min_size: avg_size/4,
max_size: avg_size*4,
window_size: window_size,
chunk_mask: chunk_mask,
window_size,
chunk_mask,
}
}
}
impl Chunker for RabinChunker {
#[allow(unknown_lints,explicit_counter_loop)]
fn chunk(&mut self, r: &mut Read, mut w: &mut Write) -> Result<ChunkerStatus, ChunkerError> {
fn chunk(&mut self, r: &mut Read, w: &mut Write) -> Result<ChunkerStatus, ChunkerError> {
let mut max;
let mut hash = 0u32;
let mut pos = 0;
@ -88,7 +88,7 @@ impl Chunker for RabinChunker {
return Ok(ChunkerStatus::Continue);
}
// Hash update
hash = hash.wrapping_mul(self.alpha).wrapping_add(val as u32);
hash = hash.wrapping_mul(self.alpha).wrapping_add(u32::from(val));
if pos >= self.window_size {
let take = window.pop_front().unwrap();
hash = hash.wrapping_sub(self.table[take as usize]);
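
The update pair above implements a rolling polynomial hash: over a window b0..b(w-1) the hash is the sum of bi * alpha^(w-1-i) mod 2^32, so appending a byte multiplies by alpha and adds it, and evicting the oldest byte subtracts its alpha^w term, which is presumably what create_table precomputes (its body is outside this hunk). The invariant in isolation:

// Assumed: table[b] == (b as u32) * alpha^window_size (wrapping).
fn roll(hash: u32, alpha: u32, table: &[u32; 256], oldest: u8, newest: u8) -> u32 {
    hash.wrapping_mul(alpha)
        .wrapping_add(u32::from(newest))
        .wrapping_sub(table[oldest as usize])
}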

View File

@ -1,6 +1,4 @@
extern crate chunking;
use chunking::*;
use super::*;
use std::io::Cursor;
@ -21,7 +19,7 @@ fn random_data(seed: u64, size: usize) -> Vec<u8> {
data
}
fn test_chunking(chunker: &mut Chunker, data: &[u8]) -> usize {
fn test_chunking(chunker: &mut Chunker, data: &[u8], chunk_lens: Option<&[usize]>) -> usize {
let mut cursor = Cursor::new(&data);
let mut chunks = vec![];
let mut chunk = vec![];
@ -36,6 +34,12 @@ fn test_chunking(chunker: &mut Chunker, data: &[u8]) -> usize {
assert_eq!(&data[pos..pos+chunk.len()], chunk as &[u8]);
pos += chunk.len();
}
if let Some(chunk_lens) = chunk_lens {
assert_eq!(chunk_lens.len(), chunks.len());
for (i, chunk) in chunks.iter().enumerate() {
assert_eq!(chunk.len(), chunk_lens[i]);
}
}
assert_eq!(pos, data.len());
chunks.len()
}
@ -43,10 +47,13 @@ fn test_chunking(chunker: &mut Chunker, data: &[u8]) -> usize {
#[test]
fn test_fixed() {
test_chunking(&mut FixedChunker::new(8192), &random_data(0, 128*1024),
Some(&[8192, 8192, 8192, 8192, 8192, 8192, 8192, 8192, 8192, 8192,
8192, 8192, 8192, 8192, 8192, 8192, 0]));
let data = random_data(0, 10*1024*1024);
for n in &[1usize,2,4,8,16,32,64,128,256,512,1024] {
let mut chunker = FixedChunker::new(1024*n);
let len = test_chunking(&mut chunker, &data);
let len = test_chunking(&mut chunker, &data, None);
assert!(len >= data.len()/n/1024/4);
assert!(len <= data.len()/n/1024*4);
}
@ -54,10 +61,13 @@ fn test_fixed() {
#[test]
fn test_ae() {
test_chunking(&mut AeChunker::new(8192), &random_data(0, 128*1024),
Some(&[7979, 8046, 7979, 8192, 8192, 8192, 7965, 8158, 8404, 8241,
8011, 8302, 8120, 8335, 8192, 8192, 572]));
let data = random_data(0, 10*1024*1024);
for n in &[1usize,2,4,8,16,32,64,128,256,512,1024] {
let mut chunker = AeChunker::new(1024*n);
let len = test_chunking(&mut chunker, &data);
let len = test_chunking(&mut chunker, &data, None);
assert!(len >= data.len()/n/1024/4);
assert!(len <= data.len()/n/1024*4);
}
@ -65,10 +75,13 @@ fn test_ae() {
#[test]
fn test_rabin() {
test_chunking(&mut RabinChunker::new(8192, 0), &random_data(0, 128*1024),
Some(&[8604, 4190, 32769, 3680, 26732, 3152, 9947, 6487, 25439, 3944,
6128]));
let data = random_data(0, 10*1024*1024);
for n in &[1usize,2,4,8,16,32,64,128,256,512,1024] {
let mut chunker = RabinChunker::new(1024*n, 0);
let len = test_chunking(&mut chunker, &data);
let len = test_chunking(&mut chunker, &data, None);
assert!(len >= data.len()/n/1024/4);
assert!(len <= data.len()/n/1024*4);
}
@ -76,10 +89,13 @@ fn test_rabin() {
#[test]
fn test_fastcdc() {
test_chunking(&mut FastCdcChunker::new(8192, 0), &random_data(0, 128*1024),
Some(&[8712, 8018, 2847, 9157, 8997, 8581, 8867, 5422, 5412, 9478,
11553, 9206, 4606, 8529, 3821, 11342, 6524]));
let data = random_data(0, 10*1024*1024);
for n in &[1usize,2,4,8,16,32,64,128,256,512,1024] {
let mut chunker = FastCdcChunker::new(1024*n, 0);
let len = test_chunking(&mut chunker, &data);
let len = test_chunking(&mut chunker, &data, None);
assert!(len >= data.len()/n/1024/4);
assert!(len <= data.len()/n/1024*4);
}

View File

@ -52,7 +52,7 @@ pub fn run(
let mut total_write_time = 0.0;
let mut total_read_time = 0.0;
println!("Reading input file ...");
tr_println!("Reading input file ...");
let mut file = File::open(path).unwrap();
let total_size = file.metadata().unwrap().len();
let mut size = total_size;
@ -67,7 +67,7 @@ pub fn run(
println!();
println!(
tr_println!(
"Chunking data with {}, avg chunk size {} ...",
chunker.name(),
to_file_size(chunker.avg_size() as u64)
@ -95,7 +95,7 @@ pub fn run(
.sum::<f32>() /
(chunks.len() as f32 - 1.0))
.sqrt();
println!(
tr_println!(
"- {} chunks, avg size: {} ±{}",
chunks.len(),
to_file_size(chunk_size_avg as u64),
@ -104,7 +104,7 @@ pub fn run(
println!();
println!("Hashing chunks with {} ...", hash.name());
tr_println!("Hashing chunks with {} ...", hash.name());
let mut hashes = Vec::with_capacity(chunks.len());
let hash_time = Duration::span(|| for &(pos, len) in &chunks {
hashes.push(hash.hash(&data[pos..pos + len]))
@ -128,8 +128,8 @@ pub fn run(
let (_, len) = chunks.remove(*i);
dup_size += len;
}
println!(
"- {} duplicate chunks, {}, {:.1}% saved",
tr_println!(
"- {} duplicate chunks, {}, {:.1}% saved by internal deduplication",
dups.len(),
to_file_size(dup_size as u64),
dup_size as f32 / size as f32 * 100.0
@ -141,7 +141,7 @@ pub fn run(
if let Some(compression) = compression.clone() {
println!();
println!("Compressing chunks with {} ...", compression.to_string());
tr_println!("Compressing chunks with {} ...", compression.to_string());
let compress_time = Duration::span(|| {
let mut bundle = Vec::with_capacity(bundle_size + 2 * chunk_size_avg as usize);
let mut c = compression.compress_stream().unwrap();
@ -164,7 +164,7 @@ pub fn run(
to_speed(size, compress_time)
);
let compressed_size = bundles.iter().map(|b| b.len()).sum::<usize>();
println!(
tr_println!(
"- {} bundles, {}, {:.1}% saved",
bundles.len(),
to_file_size(compressed_size as u64),
@ -191,7 +191,7 @@ pub fn run(
crypto.add_secret_key(public, secret);
let encryption = (EncryptionMethod::Sodium, public[..].to_vec().into());
println!("Encrypting bundles...");
tr_println!("Encrypting bundles...");
let mut encrypted_bundles = Vec::with_capacity(bundles.len());
let encrypt_time = Duration::span(|| for bundle in bundles {
@ -206,7 +206,7 @@ pub fn run(
println!();
println!("Decrypting bundles...");
tr_println!("Decrypting bundles...");
bundles = Vec::with_capacity(encrypted_bundles.len());
let decrypt_time = Duration::span(|| for bundle in encrypted_bundles {
bundles.push(crypto.decrypt(&encryption, &bundle).unwrap());
@ -222,7 +222,7 @@ pub fn run(
if let Some(compression) = compression {
println!();
println!("Decompressing bundles with {} ...", compression.to_string());
tr_println!("Decompressing bundles with {} ...", compression.to_string());
let mut dummy = ChunkSink {
chunks: vec![],
written: 0,
@ -243,17 +243,17 @@ pub fn run(
println!();
println!(
tr_println!(
"Total storage size: {} / {}, ratio: {:.1}%",
to_file_size(size as u64),
to_file_size(total_size as u64),
size as f32 / total_size as f32 * 100.0
);
println!(
tr_println!(
"Total processing speed: {}",
to_speed(total_size, total_write_time)
);
println!(
tr_println!(
"Total read speed: {}",
to_speed(total_size, total_read_time)
);

View File

@ -2,9 +2,10 @@ use prelude::*;
use super::*;
use std::path::{Path, PathBuf};
use log::LogLevel;
use log;
use clap::{App, AppSettings, Arg, SubCommand};
#[allow(option_option)]
pub enum Arguments {
Init {
repo_path: PathBuf,
@ -40,6 +41,12 @@ pub enum Arguments {
inode: Option<String>,
force: bool
},
Duplicates {
repo_path: PathBuf,
backup_name: String,
inode: Option<String>,
min_size: u64
},
Prune {
repo_path: PathBuf,
prefix: String,
@ -74,6 +81,9 @@ pub enum Arguments {
backup_name: Option<String>,
inode: Option<String>
},
Statistics {
repo_path: PathBuf
},
Copy {
repo_path_src: PathBuf,
backup_name_src: String,
@ -156,10 +166,10 @@ fn parse_repo_path(
let mut parts = repo_path.splitn(3, "::");
let repo = convert_repo_path(parts.next().unwrap_or(""));
if existing && !repo.join("config.yaml").exists() {
return Err("The specified repository does not exist".to_string());
return Err(tr!("The specified repository does not exist").to_string());
}
if !existing && repo.exists() {
return Err("The specified repository already exists".to_string());
return Err(tr!("The specified repository already exists").to_string());
}
let mut backup = parts.next();
if let Some(val) = backup {
@ -175,18 +185,18 @@ fn parse_repo_path(
}
if let Some(restr) = backup_restr {
if !restr && backup.is_some() {
return Err("No backup may be given here".to_string());
return Err(tr!("No backup may be given here").to_string());
}
if restr && backup.is_none() {
return Err("A backup must be specified".to_string());
return Err(tr!("A backup must be specified").to_string());
}
}
if let Some(restr) = path_restr {
if !restr && path.is_some() {
return Err("No subpath may be given here".to_string());
return Err(tr!("No subpath may be given here").to_string());
}
if restr && path.is_none() {
return Err("A subpath must be specified".to_string());
return Err(tr!("A subpath must be specified").to_string());
}
}
Ok((repo, backup, path))
@ -202,11 +212,36 @@ fn validate_repo_path(
parse_repo_path(&repo_path, existing, backup_restr, path_restr).map(|_| ())
}
fn parse_filesize(num: &str) -> Result<u64, String> {
let (num, suffix) = if !num.is_empty() {
num.split_at(num.len() - 1)
} else {
(num, "b")
};
let factor = match suffix {
"b" | "B" => 1,
"k" | "K" => 1024,
"m" | "M" => 1024*1024,
"g" | "G" => 1024*1024*1024,
"t" | "T" => 1024*1024*1024*1024,
_ => return Err(tr!("Unknown suffix").to_string())
};
let num = try!(parse_num(num));
Ok(num * factor)
}
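
Note that the split always peels off the final character as the unit, so a bare number like "100" parses "0" as its suffix and is rejected; every size needs a unit, which is why the new duplicates default below is spelled "1b". Illustrative values:

#[test]
fn filesize_suffix_examples() {
    assert_eq!(parse_filesize("25m"), Ok(25 * 1024 * 1024));
    assert_eq!(parse_filesize("1b"), Ok(1)); // matches DEFAULT_DUPLICATES_MIN_SIZE_STR
    assert!(parse_filesize("100").is_err()); // trailing "0" is an unknown suffix
}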
#[allow(unknown_lints, needless_pass_by_value)]
fn validate_filesize(val: String) -> Result<(), String> {
parse_filesize(&val).map(|_| ())
}
fn parse_num(num: &str) -> Result<u64, String> {
if let Ok(num) = num.parse::<u64>() {
Ok(num)
} else {
Err("Must be a number".to_string())
Err(tr!("Must be a number").to_string())
}
}
@ -219,7 +254,7 @@ fn parse_chunker(val: &str) -> Result<ChunkerType, String> {
if let Ok(chunker) = ChunkerType::from_string(val) {
Ok(chunker)
} else {
Err("Invalid chunker method/size".to_string())
Err(tr!("Invalid chunker method/size").to_string())
}
}
@ -235,7 +270,7 @@ fn parse_compression(val: &str) -> Result<Option<Compression>, String> {
if let Ok(compression) = Compression::from_string(val) {
Ok(Some(compression))
} else {
Err("Invalid compression method/level".to_string())
Err(tr!("Invalid compression method/level").to_string())
}
}
@ -251,13 +286,13 @@ fn parse_public_key(val: &str) -> Result<Option<PublicKey>, String> {
let bytes = match parse_hex(val) {
Ok(bytes) => bytes,
Err(_) => {
return Err("Invalid hexadecimal".to_string());
return Err(tr!("Invalid hexadecimal").to_string());
}
};
if let Some(key) = PublicKey::from_slice(&bytes) {
Ok(Some(key))
} else {
return Err("Invalid key".to_string());
return Err(tr!("Invalid key").to_string());
}
}
@ -270,7 +305,7 @@ fn parse_hash(val: &str) -> Result<HashMethod, String> {
if let Ok(hash) = HashMethod::from(val) {
Ok(hash)
} else {
Err("Invalid hash method".to_string())
Err(tr!("Invalid hash method").to_string())
}
}
@ -283,7 +318,7 @@ fn parse_bundle_id(val: &str) -> Result<BundleId, ErrorCode> {
if let Ok(hash) = Hash::from_string(val) {
Ok(BundleId(hash))
} else {
error!("Invalid bundle id: {}", val);
tr_error!("Invalid bundle id: {}", val);
Err(ErrorCode::InvalidArgs)
}
}
@ -291,7 +326,7 @@ fn parse_bundle_id(val: &str) -> Result<BundleId, ErrorCode> {
#[allow(unknown_lints, needless_pass_by_value)]
fn validate_existing_path(val: String) -> Result<(), String> {
if !Path::new(&val).exists() {
Err("Path does not exist".to_string())
Err(tr!("Path does not exist").to_string())
} else {
Ok(())
}
@ -300,7 +335,7 @@ fn validate_existing_path(val: String) -> Result<(), String> {
#[allow(unknown_lints, needless_pass_by_value)]
fn validate_existing_path_or_stdio(val: String) -> Result<(), String> {
if val != "-" && !Path::new(&val).exists() {
Err("Path does not exist".to_string())
Err(tr!("Path does not exist").to_string())
} else {
Ok(())
}
@ -308,154 +343,292 @@ fn validate_existing_path_or_stdio(val: String) -> Result<(), String> {
#[allow(unknown_lints, cyclomatic_complexity)]
pub fn parse() -> Result<(LogLevel, Arguments), ErrorCode> {
let args = App::new("zvault").version(crate_version!()).author(crate_authors!(",\n")).about(crate_description!())
pub fn parse() -> Result<(log::Level, Arguments), ErrorCode> {
let args = App::new("zvault")
.version(crate_version!())
.author(crate_authors!(",\n"))
.about(crate_description!())
.settings(&[AppSettings::VersionlessSubcommands, AppSettings::SubcommandRequiredElseHelp])
.global_settings(&[AppSettings::AllowMissingPositional, AppSettings::UnifiedHelpMessage, AppSettings::ColoredHelp, AppSettings::ColorAuto])
.arg(Arg::from_usage("-v --verbose 'Print more information'").global(true).multiple(true).max_values(3).takes_value(false))
.arg(Arg::from_usage("-q --quiet 'Print less information'").global(true).conflicts_with("verbose"))
.subcommand(SubCommand::with_name("init").about("Initialize a new repository")
.arg(Arg::from_usage("[bundle_size] --bundle-size [SIZE] 'Set the target bundle size in MiB'")
.default_value(DEFAULT_BUNDLE_SIZE_STR).validator(validate_num))
.arg(Arg::from_usage("--chunker [CHUNKER] 'Set the chunker algorithm and target chunk size'")
.default_value(DEFAULT_CHUNKER).validator(validate_chunker))
.arg(Arg::from_usage("-c --compression [COMPRESSION] 'Set the compression method and level'")
.default_value(DEFAULT_COMPRESSION).validator(validate_compression))
.arg(Arg::from_usage("-e --encrypt 'Generate a keypair and enable encryption'"))
.arg(Arg::from_usage("--hash [HASH] 'Set the hash method'")
.default_value(DEFAULT_HASH).validator(validate_hash))
.arg(Arg::from_usage("-r --remote <REMOTE> 'Set the path to the mounted remote storage'")
.validator(validate_existing_path))
.arg(Arg::from_usage("<REPO> 'The path for the new repository'")
.validator(|val| validate_repo_path(val, false, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("backup").about("Create a new backup")
.arg(Arg::from_usage("--full 'Create a full backup without using a reference'"))
.arg(Arg::from_usage("[reference] --ref [REF] 'Base the new backup on this reference'")
.conflicts_with("full"))
.arg(Arg::from_usage("[cross_device] -x --xdev 'Allow to cross filesystem boundaries'"))
.arg(Arg::from_usage("-e --exclude [PATTERN]... 'Exclude this path or file pattern'"))
.arg(Arg::from_usage("[excludes_from] --excludes-from [FILE] 'Read the list of excludes from this file'"))
.arg(Arg::from_usage("[no_default_excludes] --no-default-excludes 'Do not load the default excludes file'"))
.arg(Arg::from_usage("--tar 'Read the source data from a tar file'")
.conflicts_with_all(&["reference", "exclude", "excludes_from"]))
.arg(Arg::from_usage("<SRC> 'Source path to backup'")
.validator(validate_existing_path_or_stdio))
.arg(Arg::from_usage("<BACKUP> 'Backup path, [repository]::backup'")
.validator(|val| validate_repo_path(val, true, Some(true), Some(false)))))
.subcommand(SubCommand::with_name("restore").about("Restore a backup or subtree")
.arg(Arg::from_usage("--tar 'Restore in form of a tar file'"))
.arg(Arg::from_usage("<BACKUP> 'The backup/subtree path, [repository]::backup[::subtree]'")
.validator(|val| validate_repo_path(val, true, Some(true), None)))
.arg(Arg::from_usage("<DST> 'Destination path for backup'")))
.subcommand(SubCommand::with_name("remove").aliases(&["rm", "delete", "del"]).about("Remove a backup or a subtree")
.arg(Arg::from_usage("-f --force 'Remove multiple backups in a backup folder'"))
.arg(Arg::from_usage("<BACKUP> 'The backup/subtree path, [repository]::backup[::subtree]'")
.validator(|val| validate_repo_path(val, true, Some(true), None))))
.subcommand(SubCommand::with_name("prune").about("Remove backups based on age")
.arg(Arg::from_usage("-p --prefix [PREFIX] 'Only consider backups starting with this prefix'"))
.arg(Arg::from_usage("-d --daily [NUM] 'Keep this number of daily backups'")
.default_value("0").validator(validate_num))
.arg(Arg::from_usage("-w --weekly [NUM] 'Keep this number of weekly backups'")
.default_value("0").validator(validate_num))
.arg(Arg::from_usage("-m --monthly [NUM] 'Keep this number of monthly backups'")
.default_value("0").validator(validate_num))
.arg(Arg::from_usage("-y --yearly [NUM] 'Keep this number of yearly backups'")
.default_value("0").validator(validate_num))
.arg(Arg::from_usage("-f --force 'Actually run the prune instead of simulating it'"))
.arg(Arg::from_usage("<REPO> 'Path of the repository'")
.validator(|val| validate_repo_path(val, true, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("vacuum").about("Reclaim space by rewriting bundles")
.arg(Arg::from_usage("-r --ratio [NUM] 'Ratio in % of unused space in a bundle to rewrite that bundle'")
.default_value(DEFAULT_VACUUM_RATIO_STR).validator(validate_num))
.arg(Arg::from_usage("--combine 'Combine small bundles into larger ones'"))
.arg(Arg::from_usage("-f --force 'Actually run the vacuum instead of simulating it'"))
.arg(Arg::from_usage("<REPO> 'Path of the repository'")
.validator(|val| validate_repo_path(val, true, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("check").about("Check the repository, a backup or a backup subtree")
.arg(Arg::from_usage("-b --bundles 'Check the bundles'"))
.arg(Arg::from_usage("[bundle_data] --bundle-data 'Check bundle contents (slow)'").requires("bundles").alias("data"))
.arg(Arg::from_usage("-i --index 'Check the chunk index'"))
.arg(Arg::from_usage("-r --repair 'Try to repair errors'"))
.arg(Arg::from_usage("<PATH> 'Path of the repository/backup/subtree, [repository][::backup[::subtree]]'")
.validator(|val| validate_repo_path(val, true, None, None))))
.subcommand(SubCommand::with_name("list").alias("ls").about("List backups or backup contents")
.arg(Arg::from_usage("<PATH> 'Path of the repository/backup/subtree, [repository][::backup[::subtree]]'")
.validator(|val| validate_repo_path(val, true, None, None))))
.subcommand(SubCommand::with_name("mount").about("Mount the repository, a backup or a subtree")
.arg(Arg::from_usage("<PATH> 'Path of the repository/backup/subtree, [repository][::backup[::subtree]]'")
.validator(|val| validate_repo_path(val, true, None, None)))
.arg(Arg::from_usage("<MOUNTPOINT> 'Existing mount point'")
.validator(validate_existing_path)))
.subcommand(SubCommand::with_name("bundlelist").about("List bundles in a repository")
.arg(Arg::from_usage("<REPO> 'Path of the repository'")
.validator(|val| validate_repo_path(val, true, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("bundleinfo").about("Display information on a bundle")
.arg(Arg::from_usage("<REPO> 'Path of the repository'")
.validator(|val| validate_repo_path(val, true, Some(false), Some(false))))
.arg(Arg::from_usage("<BUNDLE> 'Id of the bundle'")))
.subcommand(SubCommand::with_name("import").about("Reconstruct a repository from the remote storage")
.arg(Arg::from_usage("-k --key [FILE]... 'Key file needed to read the bundles'"))
.arg(Arg::from_usage("<REMOTE> 'Remote repository path'")
.validator(validate_existing_path))
.arg(Arg::from_usage("<REPO> 'The path for the new repository'")
.validator(|val| validate_repo_path(val, false, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("info").about("Display information on a repository, a backup or a subtree")
.arg(Arg::from_usage("<PATH> 'Path of the repository/backup/subtree, [repository][::backup[::subtree]]'")
.validator(|val| validate_repo_path(val, true, None, None))))
.subcommand(SubCommand::with_name("analyze").about("Analyze the used and reclaimable space of bundles")
.arg(Arg::from_usage("<REPO> 'Path of the repository'")
.validator(|val| validate_repo_path(val, true, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("versions").about("Find different versions of a file in all backups")
.arg(Arg::from_usage("<REPO> 'Path of the repository'")
.validator(|val| validate_repo_path(val, true, Some(false), Some(false))))
.arg(Arg::from_usage("<PATH> 'Path of the file'")))
.subcommand(SubCommand::with_name("diff").about("Display differences between two backup versions")
.arg(Arg::from_usage("<OLD> 'Old version, [repository]::backup[::subpath]'")
.validator(|val| validate_repo_path(val, true, Some(true), None)))
.arg(Arg::from_usage("<NEW> 'New version, [repository]::backup[::subpath]'")
.validator(|val| validate_repo_path(val, true, Some(true), None))))
.subcommand(SubCommand::with_name("copy").alias("cp").about("Create a copy of a backup")
.arg(Arg::from_usage("<SRC> 'Existing backup, [repository]::backup'")
.validator(|val| validate_repo_path(val, true, Some(true), Some(false))))
.arg(Arg::from_usage("<DST> 'Destination backup, [repository]::backup'")
.validator(|val| validate_repo_path(val, true, Some(true), Some(false)))))
.subcommand(SubCommand::with_name("config").about("Display or change the configuration")
.arg(Arg::from_usage("[bundle_size] --bundle-size [SIZE] 'Set the target bundle size in MiB'")
.arg(Arg::from_usage("-v --verbose")
.help(tr!("Print more information"))
.global(true)
.multiple(true)
.max_values(3)
.takes_value(false))
.arg(Arg::from_usage("-q --quiet")
.help(tr!("Print less information"))
.global(true)
.conflicts_with("verbose"))
.subcommand(SubCommand::with_name("init")
.about(tr!("Initialize a new repository"))
.arg(Arg::from_usage("[bundle_size] --bundle-size [SIZE]")
.help(tr!("Set the target bundle size in MiB"))
.default_value(DEFAULT_BUNDLE_SIZE_STR)
.validator(validate_num))
.arg(Arg::from_usage("--chunker [CHUNKER] 'Set the chunker algorithm and target chunk size'")
.arg(Arg::from_usage("--chunker [CHUNKER]")
.help(tr!("Set the chunker algorithm and target chunk size"))
.default_value(DEFAULT_CHUNKER)
.validator(validate_chunker))
.arg(Arg::from_usage("-c --compression [COMPRESSION] 'Set the compression method and level'")
.arg(Arg::from_usage("-c --compression [COMPRESSION]")
.help(tr!("Set the compression method and level"))
.default_value(DEFAULT_COMPRESSION)
.validator(validate_compression))
.arg(Arg::from_usage("-e --encryption [PUBLIC_KEY] 'The public key to use for encryption'")
.validator(validate_public_key))
.arg(Arg::from_usage("--hash [HASH] 'Set the hash method'")
.arg(Arg::from_usage("-e --encrypt")
.help(tr!("Generate a keypair and enable encryption")))
.arg(Arg::from_usage("--hash [HASH]")
.help(tr!("Set the hash method'"))
.default_value(DEFAULT_HASH)
.validator(validate_hash))
.arg(Arg::from_usage("<REPO> 'Path of the repository'")
.validator(|val| validate_repo_path(val, true, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("genkey").about("Generate a new key pair")
.arg(Arg::from_usage("-p --password [PASSWORD] 'Derive the key pair from the given password'"))
.arg(Arg::from_usage("[FILE] 'Destination file for the keypair'")))
.subcommand(SubCommand::with_name("addkey").about("Add a key pair to the repository")
.arg(Arg::from_usage("-g --generate 'Generate a new key pair'")
.conflicts_with("FILE"))
.arg(Arg::from_usage("[set_default] --default -d 'Set the key pair as default'"))
.arg(Arg::from_usage("-p --password [PASSWORD] 'Derive the key pair from the given password'")
.requires("generate"))
.arg(Arg::from_usage("[FILE] 'File containing the keypair'")
.arg(Arg::from_usage("-r --remote <REMOTE>")
.help(tr!("Set the path to the mounted remote storage"))
.validator(validate_existing_path))
.arg(Arg::from_usage("<REPO> 'Path of the repository'")
.arg(Arg::from_usage("<REPO>")
.help(tr!("The path for the new repository"))
.validator(|val| validate_repo_path(val, false, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("backup")
.about(tr!("Create a new backup"))
.arg(Arg::from_usage("--full")
.help(tr!("Create a full backup without using a reference")))
.arg(Arg::from_usage("[reference] --ref [REF]")
.help(tr!("Base the new backup on this reference"))
.conflicts_with("full"))
.arg(Arg::from_usage("[cross_device] -x --xdev")
.help(tr!("Allow to cross filesystem boundaries")))
.arg(Arg::from_usage("-e --exclude [PATTERN]...")
.help(tr!("Exclude this path or file pattern")))
.arg(Arg::from_usage("[excludes_from] --excludes-from [FILE]")
.help(tr!("Read the list of excludes from this file")))
.arg(Arg::from_usage("[no_default_excludes] --no-default-excludes")
.help(tr!("Do not load the default excludes file")))
.arg(Arg::from_usage("--tar")
.help(tr!("Read the source data from a tar file"))
.conflicts_with_all(&["reference", "exclude", "excludes_from"]))
.arg(Arg::from_usage("<SRC>")
.help(tr!("Source path to backup"))
.validator(validate_existing_path_or_stdio))
.arg(Arg::from_usage("<BACKUP>")
.help(tr!("Backup path, [repository]::backup"))
.validator(|val| validate_repo_path(val, true, Some(true), Some(false)))))
.subcommand(SubCommand::with_name("restore")
.about(tr!("Restore a backup or subtree"))
.arg(Arg::from_usage("--tar")
.help(tr!("Restore in form of a tar file")))
.arg(Arg::from_usage("<BACKUP>")
.help(tr!("The backup/subtree path, [repository]::backup[::subtree]"))
.validator(|val| validate_repo_path(val, true, Some(true), None)))
.arg(Arg::from_usage("<DST>")
.help(tr!("Destination path for backup"))))
.subcommand(SubCommand::with_name("remove")
.aliases(&["rm", "delete", "del"])
.about(tr!("Remove a backup or a subtree"))
.arg(Arg::from_usage("-f --force")
.help(tr!("Remove multiple backups in a backup folder")))
.arg(Arg::from_usage("<BACKUP>")
.help(tr!("The backup/subtree path, [repository]::backup[::subtree]"))
.validator(|val| validate_repo_path(val, true, Some(true), None))))
.subcommand(SubCommand::with_name("prune")
.about(tr!("Remove backups based on age"))
.arg(Arg::from_usage("-p --prefix [PREFIX]")
.help(tr!("Only consider backups starting with this prefix")))
.arg(Arg::from_usage("-d --daily [NUM]")
.help(tr!("Keep this number of daily backups"))
.default_value("0")
.validator(validate_num))
.arg(Arg::from_usage("-w --weekly [NUM]")
.help(tr!("Keep this number of weekly backups"))
.default_value("0")
.validator(validate_num))
.arg(Arg::from_usage("-m --monthly [NUM]")
.help(tr!("Keep this number of monthly backups"))
.default_value("0")
.validator(validate_num))
.arg(Arg::from_usage("-y --yearly [NUM]")
.help(tr!("Keep this number of yearly backups"))
.default_value("0")
.validator(validate_num))
.arg(Arg::from_usage("-f --force")
.help(tr!("Actually run the prune instead of simulating it")))
.arg(Arg::from_usage("<REPO>")
.help(tr!("Path of the repository"))
.validator(|val| validate_repo_path(val, true, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("algotest").about("Test a specific algorithm combination")
.arg(Arg::from_usage("[bundle_size] --bundle-size [SIZE] 'Set the target bundle size in MiB'")
.default_value(DEFAULT_BUNDLE_SIZE_STR).validator(validate_num))
.arg(Arg::from_usage("--chunker [CHUNKER] 'Set the chunker algorithm and target chunk size'")
.default_value(DEFAULT_CHUNKER).validator(validate_chunker))
.arg(Arg::from_usage("-c --compression [COMPRESSION] 'Set the compression method and level'")
.default_value(DEFAULT_COMPRESSION).validator(validate_compression))
.arg(Arg::from_usage("-e --encrypt 'Generate a keypair and enable encryption'"))
.arg(Arg::from_usage("--hash [HASH] 'Set the hash method'")
.default_value(DEFAULT_HASH).validator(validate_hash))
.arg(Arg::from_usage("<FILE> 'File with test data'")
.subcommand(SubCommand::with_name("vacuum")
.about(tr!("Reclaim space by rewriting bundles"))
.arg(Arg::from_usage("-r --ratio [NUM]")
.help(tr!("Ratio in % of unused space in a bundle to rewrite that bundle"))
.default_value(DEFAULT_VACUUM_RATIO_STR).validator(validate_num))
.arg(Arg::from_usage("--combine")
.help(tr!("Combine small bundles into larger ones")))
.arg(Arg::from_usage("-f --force")
.help(tr!("Actually run the vacuum instead of simulating it")))
.arg(Arg::from_usage("<REPO>")
.help(tr!("Path of the repository"))
.validator(|val| validate_repo_path(val, true, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("check")
.about(tr!("Check the repository, a backup or a backup subtree"))
.arg(Arg::from_usage("-b --bundles")
.help(tr!("Check the bundles")))
.arg(Arg::from_usage("[bundle_data] --bundle-data")
.help(tr!("Check bundle contents (slow)"))
.requires("bundles")
.alias("data"))
.arg(Arg::from_usage("-i --index")
.help(tr!("Check the chunk index")))
.arg(Arg::from_usage("-r --repair")
.help(tr!("Try to repair errors")))
.arg(Arg::from_usage("<PATH>")
.help(tr!("Path of the repository/backup/subtree, [repository][::backup[::subtree]]"))
.validator(|val| validate_repo_path(val, true, None, None))))
.subcommand(SubCommand::with_name("list")
.alias("ls")
.about(tr!("List backups or backup contents"))
.arg(Arg::from_usage("<PATH>")
.help(tr!("Path of the repository/backup/subtree, [repository][::backup[::subtree]]"))
.validator(|val| validate_repo_path(val, true, None, None))))
.subcommand(SubCommand::with_name("mount")
.about(tr!("Mount the repository, a backup or a subtree"))
.arg(Arg::from_usage("<PATH>")
.help(tr!("Path of the repository/backup/subtree, [repository][::backup[::subtree]]"))
.validator(|val| validate_repo_path(val, true, None, None)))
.arg(Arg::from_usage("<MOUNTPOINT>")
.help(tr!("Existing mount point"))
.validator(validate_existing_path)))
.subcommand(SubCommand::with_name("bundlelist")
.about(tr!("List bundles in a repository"))
.arg(Arg::from_usage("<REPO>")
.help(tr!("Path of the repository"))
.validator(|val| validate_repo_path(val, true, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("statistics")
.alias("stats")
.about(tr!("Display statistics on a repository"))
.arg(Arg::from_usage("<REPO>")
.help(tr!("Path of the repository"))
.validator(|val| validate_repo_path(val, true, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("bundleinfo")
.about(tr!("Display information on a bundle"))
.arg(Arg::from_usage("<REPO>")
.help(tr!("Path of the repository"))
.validator(|val| validate_repo_path(val, true, Some(false), Some(false))))
.arg(Arg::from_usage("<BUNDLE>")
.help(tr!("Id of the bundle"))))
.subcommand(SubCommand::with_name("import")
.about(tr!("Reconstruct a repository from the remote storage"))
.arg(Arg::from_usage("-k --key [FILE]...")
.help(tr!("Key file needed to read the bundles")))
.arg(Arg::from_usage("<REMOTE>")
.help(tr!("Remote repository path"))
.validator(validate_existing_path))
.arg(Arg::from_usage("<REPO>")
.help(tr!("The path for the new repository"))
.validator(|val| validate_repo_path(val, false, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("info")
.about(tr!("Display information on a repository, a backup or a subtree"))
.arg(Arg::from_usage("<PATH>")
.help(tr!("Path of the repository/backup/subtree, [repository][::backup[::subtree]]"))
.validator(|val| validate_repo_path(val, true, None, None))))
.subcommand(SubCommand::with_name("analyze")
.about(tr!("Analyze the used and reclaimable space of bundles"))
.arg(Arg::from_usage("<REPO>")
.help(tr!("Path of the repository"))
.validator(|val| validate_repo_path(val, true, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("versions")
.about(tr!("Find different versions of a file in all backups"))
.arg(Arg::from_usage("<REPO>")
.help(tr!("Path of the repository"))
.validator(|val| validate_repo_path(val, true, Some(false), Some(false))))
.arg(Arg::from_usage("<PATH>")
.help(tr!("Path of the file"))))
.subcommand(SubCommand::with_name("diff")
.about(tr!("Display differences between two backup versions"))
.arg(Arg::from_usage("<OLD>")
.help(tr!("Old version, [repository]::backup[::subpath]"))
.validator(|val| validate_repo_path(val, true, Some(true), None)))
.arg(Arg::from_usage("<NEW>")
.help(tr!("New version, [repository]::backup[::subpath]"))
.validator(|val| validate_repo_path(val, true, Some(true), None))))
.subcommand(SubCommand::with_name("duplicates")
.aliases(&["dups"])
.about(tr!("Find duplicate files in a backup"))
.arg(Arg::from_usage("[min_size] --min-size [SIZE]")
.help(tr!("Set the minimum file size"))
.default_value(DEFAULT_DUPLICATES_MIN_SIZE_STR)
.validator(validate_filesize))
.arg(Arg::from_usage("<BACKUP>")
.help(tr!("The backup/subtree path, [repository]::backup[::subtree]"))
.validator(|val| validate_repo_path(val, true, Some(true), None))))
.subcommand(SubCommand::with_name("copy")
.alias("cp")
.about(tr!("Create a copy of a backup"))
.arg(Arg::from_usage("<SRC>")
.help(tr!("Existing backup, [repository]::backup"))
.validator(|val| validate_repo_path(val, true, Some(true), Some(false))))
.arg(Arg::from_usage("<DST>")
.help(tr!("Destination backup, [repository]::backup"))
.validator(|val| validate_repo_path(val, true, Some(true), Some(false)))))
.subcommand(SubCommand::with_name("config")
.about(tr!("Display or change the configuration"))
.arg(Arg::from_usage("[bundle_size] --bundle-size [SIZE]")
.help(tr!("Set the target bundle size in MiB"))
.validator(validate_num))
.arg(Arg::from_usage("--chunker [CHUNKER]")
.help(tr!("Set the chunker algorithm and target chunk size"))
.validator(validate_chunker))
.arg(Arg::from_usage("-c --compression [COMPRESSION]")
.help(tr!("Set the compression method and level"))
.validator(validate_compression))
.arg(Arg::from_usage("-e --encryption [PUBLIC_KEY]")
.help(tr!("The public key to use for encryption"))
.validator(validate_public_key))
.arg(Arg::from_usage("--hash [HASH]")
.help(tr!("Set the hash method"))
.validator(validate_hash))
.arg(Arg::from_usage("<REPO>")
.help(tr!("Path of the repository"))
.validator(|val| validate_repo_path(val, true, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("genkey")
.about(tr!("Generate a new key pair"))
.arg(Arg::from_usage("-p --password [PASSWORD]")
.help(tr!("Derive the key pair from the given password")))
.arg(Arg::from_usage("[FILE]")
.help(tr!("Destination file for the keypair"))))
.subcommand(SubCommand::with_name("addkey")
.about(tr!("Add a key pair to the repository"))
.arg(Arg::from_usage("-g --generate")
.help(tr!("Generate a new key pair"))
.conflicts_with("FILE"))
.arg(Arg::from_usage("[set_default] --default -d")
.help(tr!("Set the key pair as default")))
.arg(Arg::from_usage("-p --password [PASSWORD]")
.help(tr!("Derive the key pair from the given password"))
.requires("generate"))
.arg(Arg::from_usage("[FILE]")
.help(tr!("File containing the keypair"))
.validator(validate_existing_path))
.arg(Arg::from_usage("<REPO>")
.help(tr!("Path of the repository"))
.validator(|val| validate_repo_path(val, true, Some(false), Some(false)))))
.subcommand(SubCommand::with_name("algotest")
.about(tr!("Test a specific algorithm combination"))
.arg(Arg::from_usage("[bundle_size] --bundle-size [SIZE]")
.help(tr!("Set the target bundle size in MiB"))
.default_value(DEFAULT_BUNDLE_SIZE_STR)
.validator(validate_num))
.arg(Arg::from_usage("--chunker [CHUNKER]")
.help(tr!("Set the chunker algorithm and target chunk size"))
.default_value(DEFAULT_CHUNKER)
.validator(validate_chunker))
.arg(Arg::from_usage("-c --compression [COMPRESSION]")
.help(tr!("Set the compression method and level"))
.default_value(DEFAULT_COMPRESSION)
.validator(validate_compression))
.arg(Arg::from_usage("-e --encrypt")
.help(tr!("Generate a keypair and enable encryption")))
.arg(Arg::from_usage("--hash [HASH]")
.help(tr!("Set the hash method"))
.default_value(DEFAULT_HASH)
.validator(validate_hash))
.arg(Arg::from_usage("<FILE>")
.help(tr!("File with test data"))
.validator(validate_existing_path))).get_matches();
let verbose_count = args.subcommand()
.1
@ -466,10 +639,10 @@ pub fn parse() -> Result<(LogLevel, Arguments), ErrorCode> {
.map(|m| m.occurrences_of("quiet"))
.unwrap_or(0) + args.occurrences_of("quiet");
let log_level = match 1 + verbose_count - quiet_count {
0 => LogLevel::Warn,
1 => LogLevel::Info,
2 => LogLevel::Debug,
_ => LogLevel::Trace,
0 => log::Level::Warn,
1 => log::Level::Info,
2 => log::Level::Debug,
_ => log::Level::Trace,
};
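// 1 + verbose_count - quiet_count maps flags to levels:
//   (no flags) -> 1  -> Info (the default)
//   -q         -> 0  -> Warn
//   -v         -> 2  -> Debug
//   -vv, -vvv  -> 3+ -> Trace (capped by max_values(3))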
let args = match args.subcommand() {
("init", Some(args)) => {
@ -616,6 +789,15 @@ pub fn parse() -> Result<(LogLevel, Arguments), ErrorCode> {
inode: inode.map(|v| v.to_string())
}
}
("statistics", Some(args)) => {
let (repository, _backup, _inode) = parse_repo_path(
args.value_of("REPO").unwrap(),
true,
Some(false),
Some(false)
).unwrap();
Arguments::Statistics { repo_path: repository }
}
("copy", Some(args)) => {
let (repository_src, backup_src, _inode) =
parse_repo_path(args.value_of("SRC").unwrap(), true, Some(true), Some(false))
@ -690,6 +872,18 @@ pub fn parse() -> Result<(LogLevel, Arguments), ErrorCode> {
.unwrap_or_else(|| vec![])
}
}
("duplicates", Some(args)) => {
let (repository, backup, inode) =
parse_repo_path(args.value_of("BACKUP").unwrap(), true, Some(true), None).unwrap();
Arguments::Duplicates {
repo_path: repository,
backup_name: backup.unwrap().to_string(),
inode: inode.map(|v| v.to_string()),
min_size: args.value_of("min_size").map(|v| {
parse_filesize(v).unwrap()
}).unwrap()
}
}
("config", Some(args)) => {
let (repository, _backup, _inode) = parse_repo_path(
args.value_of("REPO").unwrap(),
@ -744,7 +938,7 @@ pub fn parse() -> Result<(LogLevel, Arguments), ErrorCode> {
}
}
_ => {
error!("No subcommand given");
tr_error!("No subcommand given");
return Err(ErrorCode::InvalidArgs);
}
};

View File

@ -1,52 +1,45 @@
use log::{self, LogRecord, LogLevel, LogMetadata};
use log;
pub use log::SetLoggerError;
use ansi_term::{Color, Style};
use std::io::Write;
macro_rules! println_stderr(
($($arg:tt)*) => { {
let r = writeln!(&mut ::std::io::stderr(), $($arg)*);
r.expect("failed printing to stderr");
} }
);
struct Logger(LogLevel);
struct Logger(log::Level);
impl log::Log for Logger {
fn enabled(&self, metadata: &LogMetadata) -> bool {
fn enabled(&self, metadata: &log::Metadata) -> bool {
metadata.level() <= self.0
}
fn log(&self, record: &LogRecord) {
fn flush(&self) {}
fn log(&self, record: &log::Record) {
if self.enabled(record.metadata()) {
match record.level() {
LogLevel::Error => {
println_stderr!("{}: {}", Color::Red.bold().paint("error"), record.args())
log::Level::Error => {
eprintln!("{}: {}", Color::Red.bold().paint("error"), record.args())
}
LogLevel::Warn => {
println_stderr!(
log::Level::Warn => {
eprintln!(
"{}: {}",
Color::Yellow.bold().paint("warning"),
record.args()
)
}
LogLevel::Info => {
println_stderr!("{}: {}", Color::Green.bold().paint("info"), record.args())
log::Level::Info => {
eprintln!("{}: {}", Color::Green.bold().paint("info"), record.args())
}
LogLevel::Debug => {
println_stderr!("{}: {}", Style::new().bold().paint("debug"), record.args())
log::Level::Debug => {
eprintln!("{}: {}", Style::new().bold().paint("debug"), record.args())
}
LogLevel::Trace => println_stderr!("{}: {}", "trace", record.args()),
log::Level::Trace => eprintln!("{}: {}", "trace", record.args()),
}
}
}
}
pub fn init(level: LogLevel) -> Result<(), SetLoggerError> {
log::set_logger(|max_log_level| {
max_log_level.set(level.to_log_level_filter());
Box::new(Logger(level))
})
pub fn init(level: log::Level) -> Result<(), SetLoggerError> {
let logger = Logger(level);
log::set_max_level(level.to_level_filter());
log::set_boxed_logger(Box::new(logger))
}
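
This hunk is the migration from log 0.3 to 0.4: the Log trait gains flush(), LogRecord/LogLevel/LogMetadata become log::Record/Level/Metadata, and installation changes from set_logger's closure to set_boxed_logger plus an explicit set_max_level. A minimal sketch of a call site, assuming `#[macro_use] extern crate log;` at the crate root:

fn main() {
    init(log::Level::Debug).expect("logger installed twice");
    info!("repository opened");   // "info: repository opened", green prefix
    trace!("not shown at Debug"); // filtered out by set_max_level
}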

View File

@ -45,7 +45,8 @@ pub enum ErrorCode {
DiffRun,
VersionsRun,
ImportRun,
FuseMount
FuseMount,
DuplicatesRun
}
impl ErrorCode {
pub fn code(&self) -> i32 {
@ -81,6 +82,7 @@ impl ErrorCode {
ErrorCode::VersionsRun => 22,
ErrorCode::ImportRun => 23,
ErrorCode::FuseMount => 24,
ErrorCode::DuplicatesRun => 27,
//
ErrorCode::NoSuchBackup => 25,
ErrorCode::BackupAlreadyExists => 26,
@ -89,11 +91,12 @@ impl ErrorCode {
}
pub const DEFAULT_CHUNKER: &'static str = "fastcdc/16";
pub const DEFAULT_HASH: &'static str = "blake2";
pub const DEFAULT_COMPRESSION: &'static str = "brotli/3";
pub const DEFAULT_BUNDLE_SIZE_STR: &'static str = "25";
pub const DEFAULT_VACUUM_RATIO_STR: &'static str = "0";
pub const DEFAULT_CHUNKER: &str = "fastcdc/16";
pub const DEFAULT_HASH: &str = "blake2";
pub const DEFAULT_COMPRESSION: &str = "brotli/3";
pub const DEFAULT_BUNDLE_SIZE_STR: &str = "25";
pub const DEFAULT_VACUUM_RATIO_STR: &str = "0";
pub const DEFAULT_DUPLICATES_MIN_SIZE_STR: &str = "1b";
lazy_static! {
pub static ref ZVAULT_FOLDER: PathBuf = {
env::home_dir().unwrap().join(".zvault")
@ -105,16 +108,16 @@ macro_rules! checked {
match $expr {
Ok(val) => val,
Err(err) => {
error!("Failed to {}\n\tcaused by: {}", $msg, err);
tr_error!("Failed to {}\n\tcaused by: {}", tr!($msg), err);
return Err($code)
}
}
};
}
fn open_repository(path: &Path) -> Result<Repository, ErrorCode> {
fn open_repository(path: &Path, online: bool) -> Result<Repository, ErrorCode> {
Ok(checked!(
Repository::open(path),
Repository::open(path, online),
"load repository",
ErrorCode::LoadRepository
))
@ -122,7 +125,7 @@ fn open_repository(path: &Path) -> Result<Repository, ErrorCode> {
fn get_backup(repo: &Repository, backup_name: &str) -> Result<Backup, ErrorCode> {
if !repo.has_backup(backup_name) {
error!("A backup with that name does not exist");
tr_error!("A backup with that name does not exist");
return Err(ErrorCode::NoSuchBackup);
}
Ok(checked!(
@ -132,6 +135,22 @@ fn get_backup(repo: &Repository, backup_name: &str) -> Result<Backup, ErrorCode>
))
}
fn get_inode(repo: &mut Repository, backup: &Backup, inode: Option<&String>) -> Result<Inode, ErrorCode> {
Ok(if let Some(inode) = inode {
checked!(
repo.get_backup_inode(backup, &inode),
"load subpath inode",
ErrorCode::LoadInode
)
} else {
checked!(
repo.get_inode(&backup.root),
"load root inode",
ErrorCode::LoadInode
)
})
}
fn find_reference_backup(
repo: &Repository,
path: &str,
@ -145,11 +164,11 @@ fn find_reference_backup(
Ok(backup_map) => backup_map,
Err(RepositoryError::BackupFile(BackupFileError::PartialBackupsList(backup_map,
_failed))) => {
warn!("Some backups could not be read, ignoring them");
tr_warn!("Some backups could not be read, ignoring them");
backup_map
}
Err(err) => {
error!("Failed to load backup files: {}", err);
tr_error!("Failed to load backup files: {}", err);
return Err(ErrorCode::LoadBackup);
}
};
@ -164,41 +183,41 @@ fn find_reference_backup(
fn print_backup(backup: &Backup) {
if backup.modified {
warn!("This backup has been modified");
tr_warn!("This backup has been modified");
}
println!(
tr_println!(
"Date: {}",
Local.timestamp(backup.timestamp, 0).to_rfc2822()
);
println!("Source: {}:{}", backup.host, backup.path);
println!("Duration: {}", to_duration(backup.duration));
println!(
tr_println!("Source: {}:{}", backup.host, backup.path);
tr_println!("Duration: {}", to_duration(backup.duration));
tr_println!(
"Entries: {} files, {} dirs",
backup.file_count,
backup.dir_count
);
println!(
tr_println!(
"Total backup size: {}",
to_file_size(backup.total_data_size)
);
println!(
tr_println!(
"Modified data size: {}",
to_file_size(backup.changed_data_size)
);
let dedup_ratio = backup.deduplicated_data_size as f32 / backup.changed_data_size as f32;
println!(
"Deduplicated size: {}, {:.1}% saved",
tr_println!(
"Deduplicated size: {}, {:.1}%",
to_file_size(backup.deduplicated_data_size),
(1.0 - dedup_ratio) * 100.0
(dedup_ratio - 1.0) * 100.0
);
let compress_ratio = backup.encoded_data_size as f32 / backup.deduplicated_data_size as f32;
println!(
"Compressed size: {} in {} bundles, {:.1}% saved",
tr_println!(
"Compressed size: {} in {} bundles, {:.1}%",
to_file_size(backup.encoded_data_size),
backup.bundle_count,
(1.0 - compress_ratio) * 100.0
(compress_ratio - 1.0) * 100.0
);
println!(
tr_println!(
"Chunk count: {}, avg size: {}",
backup.chunk_count,
to_file_size(backup.avg_chunk_size as u64)
@ -246,30 +265,30 @@ pub fn format_inode_one_line(inode: &Inode) -> String {
}
fn print_inode(inode: &Inode) {
println!("Name: {}", inode.name);
println!("Type: {}", inode.file_type);
println!("Size: {}", to_file_size(inode.size));
println!("Permissions: {:3o}", inode.mode);
println!("User: {}", inode.user);
println!("Group: {}", inode.group);
println!(
tr_println!("Name: {}", inode.name);
tr_println!("Type: {}", inode.file_type);
tr_println!("Size: {}", to_file_size(inode.size));
tr_println!("Permissions: {:3o}", inode.mode);
tr_println!("User: {}", inode.user);
tr_println!("Group: {}", inode.group);
tr_println!(
"Timestamp: {}",
Local.timestamp(inode.timestamp, 0).to_rfc2822()
);
if let Some(ref target) = inode.symlink_target {
println!("Symlink target: {}", target);
tr_println!("Symlink target: {}", target);
}
println!("Cumulative size: {}", to_file_size(inode.cum_size));
println!("Cumulative file count: {}", inode.cum_files);
println!("Cumulative directory count: {}", inode.cum_dirs);
tr_println!("Cumulative size: {}", to_file_size(inode.cum_size));
tr_println!("Cumulative file count: {}", inode.cum_files);
tr_println!("Cumulative directory count: {}", inode.cum_dirs);
if let Some(ref children) = inode.children {
println!("Children:");
tr_println!("Children:");
for name in children.keys() {
println!(" - {}", name);
}
}
if !inode.xattrs.is_empty() {
println!("Extended attributes:");
tr_println!("Extended attributes:");
for (key, value) in &inode.xattrs {
if let Ok(value) = str::from_utf8(value) {
println!(" - {} = '{}'", key, value);
@ -296,44 +315,104 @@ fn print_backups(backup_map: &HashMap<String, Backup>) {
}
fn print_repoinfo(info: &RepositoryInfo) {
println!("Bundles: {}", info.bundle_count);
println!("Total size: {}", to_file_size(info.encoded_data_size));
println!("Uncompressed size: {}", to_file_size(info.raw_data_size));
println!("Compression ratio: {:.1}%", info.compression_ratio * 100.0);
println!("Chunk count: {}", info.chunk_count);
println!(
tr_println!("Bundles: {}", info.bundle_count);
tr_println!("Total size: {}", to_file_size(info.encoded_data_size));
tr_println!("Uncompressed size: {}", to_file_size(info.raw_data_size));
tr_println!("Compression ratio: {:.1}%", (info.compression_ratio - 1.0) * 100.0);
tr_println!("Chunk count: {}", info.chunk_count);
tr_println!(
"Average chunk size: {}",
to_file_size(info.avg_chunk_size as u64)
);
let index_usage = info.index_entries as f32 / info.index_capacity as f32;
println!(
tr_println!(
"Index: {}, {:.0}% full",
to_file_size(info.index_size as u64),
index_usage * 100.0
);
}
fn print_repostats(stats: &RepositoryStatistics) {
tr_println!("Index\n=====");
let index_usage = stats.index.count as f32 / stats.index.capacity as f32;
tr_println!("Size: {}", to_file_size(stats.index.size as u64));
tr_println!("Entries: {} / {}, {:.0}%", stats.index.count, stats.index.capacity, index_usage*100.0);
let disp = &stats.index.displacement;
tr_println!("Displacement:\n - average: {:.1}\n - stddev: {:.1}\n - over {:.1}: {:.0}, {:.1}%\n - maximum: {:.0}",
disp.avg, disp.stddev, disp.avg + 2.0 * disp.stddev, disp.count_xl, disp.count_xl as f32 / disp.count as f32 * 100.0, disp.max);
println!();
tr_println!("Bundles\n=======");
let tsize = (stats.bundles.raw_size.count as f32 * stats.bundles.encoded_size.avg) as u64;
tr_println!("All bundles: {} in {} bundles", to_file_size(tsize), stats.bundles.raw_size.count);
let rsize = &stats.bundles.raw_size;
tr_println!(" - raw size: ø = {}, maximum: {}", to_file_size(rsize.avg as u64), to_file_size(rsize.max as u64));
let esize = &stats.bundles.encoded_size;
tr_println!(" - encoded size: ø = {}, maximum: {}", to_file_size(esize.avg as u64), to_file_size(esize.max as u64));
let ccount = &stats.bundles.chunk_count;
tr_println!(" - chunk count: ø = {:.1}, maximum: {:.0}", ccount.avg, ccount.max);
let tsize = (stats.bundles.raw_size_meta.count as f32 * stats.bundles.encoded_size_meta.avg) as u64;
tr_println!("Meta bundles: {} in {} bundles", to_file_size(tsize), stats.bundles.raw_size_meta.count);
let rsize = &stats.bundles.raw_size_meta;
tr_println!(" - raw size: ø = {}, maximum: {}", to_file_size(rsize.avg as u64), to_file_size(rsize.max as u64));
let esize = &stats.bundles.encoded_size_meta;
tr_println!(" - encoded size: ø = {}, maximum: {}", to_file_size(esize.avg as u64), to_file_size(esize.max as u64));
let ccount = &stats.bundles.chunk_count_meta;
tr_println!(" - chunk count: ø = {:.1}, maximum: {:.0}", ccount.avg, ccount.max);
let tsize = (stats.bundles.raw_size_data.count as f32 * stats.bundles.encoded_size_data.avg) as u64;
tr_println!("Data bundles: {} in {} bundles", to_file_size(tsize), stats.bundles.raw_size_data.count);
let rsize = &stats.bundles.raw_size_data;
tr_println!(" - raw size: ø = {}, maximum: {}", to_file_size(rsize.avg as u64), to_file_size(rsize.max as u64));
let esize = &stats.bundles.encoded_size_data;
tr_println!(" - encoded size: ø = {}, maximum: {}", to_file_size(esize.avg as u64), to_file_size(esize.max as u64));
let ccount = &stats.bundles.chunk_count_data;
tr_println!(" - chunk count: ø = {:.1}, maximum: {:.0}", ccount.avg, ccount.max);
println!();
tr_println!("Bundle methods\n==============");
tr_println!("Hash:");
for (hash, &count) in &stats.bundles.hash_methods {
tr_println!(" - {}: {}, {:.1}%", hash.name(), count, count as f32 / stats.bundles.raw_size.count as f32 * 100.0);
}
tr_println!("Compression:");
for (compr, &count) in &stats.bundles.compressions {
let compr_name = if let Some(ref compr) = *compr {
compr.to_string()
} else {
tr!("none").to_string()
};
tr_println!(" - {}: {}, {:.1}%", compr_name, count, count as f32 / stats.bundles.raw_size.count as f32 * 100.0);
}
tr_println!("Encryption:");
for (encr, &count) in &stats.bundles.encryptions {
let encr_name = if let Some(ref encr) = *encr {
to_hex(&encr.1[..])
} else {
tr!("none").to_string()
};
tr_println!(" - {}: {}, {:.1}%", encr_name, count, count as f32 / stats.bundles.raw_size.count as f32 * 100.0);
}
}
fn print_bundle(bundle: &StoredBundle) {
println!("Bundle {}", bundle.info.id);
println!(" - Mode: {:?}", bundle.info.mode);
println!(" - Path: {:?}", bundle.path);
println!(
tr_println!("Bundle {}", bundle.info.id);
tr_println!(" - Mode: {:?}", bundle.info.mode);
tr_println!(" - Path: {:?}", bundle.path);
tr_println!(
" - Date: {}",
Local.timestamp(bundle.info.timestamp, 0).to_rfc2822()
);
println!(" - Hash method: {:?}", bundle.info.hash_method);
tr_println!(" - Hash method: {:?}", bundle.info.hash_method);
let encryption = if let Some((_, ref key)) = bundle.info.encryption {
to_hex(key)
} else {
"none".to_string()
};
println!(" - Encryption: {}", encryption);
println!(" - Chunks: {}", bundle.info.chunk_count);
println!(
tr_println!(" - Encryption: {}", encryption);
tr_println!(" - Chunks: {}", bundle.info.chunk_count);
tr_println!(
" - Size: {}",
to_file_size(bundle.info.encoded_size as u64)
);
println!(
tr_println!(
" - Data size: {}",
to_file_size(bundle.info.raw_size as u64)
);
@ -343,15 +422,15 @@ fn print_bundle(bundle: &StoredBundle) {
} else {
"none".to_string()
};
println!(
tr_println!(
" - Compression: {}, ratio: {:.1}%",
compression,
ratio * 100.0
(ratio - 1.0) * 100.0
);
}
fn print_bundle_one_line(bundle: &BundleInfo) {
println!(
tr_println!(
"{}: {:8?}, {:5} chunks, {:8}",
bundle.id,
bundle.mode,
@ -361,19 +440,19 @@ fn print_bundle_one_line(bundle: &BundleInfo) {
}
fn print_config(config: &Config) {
println!("Bundle size: {}", to_file_size(config.bundle_size as u64));
println!("Chunker: {}", config.chunker.to_string());
tr_println!("Bundle size: {}", to_file_size(config.bundle_size as u64));
tr_println!("Chunker: {}", config.chunker.to_string());
if let Some(ref compression) = config.compression {
println!("Compression: {}", compression.to_string());
tr_println!("Compression: {}", compression.to_string());
} else {
println!("Compression: none");
tr_println!("Compression: none");
}
if let Some(ref encryption) = config.encryption {
println!("Encryption: {}", to_hex(&encryption.1[..]));
tr_println!("Encryption: {}", to_hex(&encryption.1[..]));
} else {
println!("Encryption: none");
tr_println!("Encryption: none");
}
println!("Hash method: {}", config.hash.name());
tr_println!("Hash method: {}", config.hash.name());
}
fn print_analysis(analysis: &HashMap<u32, BundleAnalysis>) {
@ -390,17 +469,17 @@ fn print_analysis(analysis: &HashMap<u32, BundleAnalysis>) {
}
}
}
println!("Total bundle size: {}", to_file_size(data_total as u64));
tr_println!("Total bundle size: {}", to_file_size(data_total as u64));
let used = data_total - reclaim_space[10];
println!(
tr_println!(
"Space used: {}, {:.1} %",
to_file_size(used as u64),
used as f32 / data_total as f32 * 100.0
);
println!("Reclaimable space (depending on vacuum ratio)");
tr_println!("Reclaimable space (depending on vacuum ratio)");
#[allow(unknown_lints, needless_range_loop)]
for i in 0..11 {
println!(
tr_println!(
" - ratio={:3}: {:>10}, {:4.1} %, rewriting {:>10}",
i * 10,
to_file_size(reclaim_space[i] as u64),
@ -410,12 +489,23 @@ fn print_analysis(analysis: &HashMap<u32, BundleAnalysis>) {
}
}
fn print_duplicates(dups: Vec<(Vec<PathBuf>, u64)>) {
for (group, size) in dups {
tr_println!("{} duplicates found, size: {}", group.len(), to_file_size(size));
for dup in group {
println!(" - {}", dup.to_string_lossy());
}
println!();
}
}
#[allow(unknown_lints, cyclomatic_complexity)]
pub fn run() -> Result<(), ErrorCode> {
let (log_level, args) = try!(args::parse());
if let Err(err) = logger::init(log_level) {
println!("Failed to initialize the logger: {}", err);
tr_println!("Failed to initialize the logger: {}", err);
return Err(ErrorCode::InitializeLogger);
}
match args {
@ -429,18 +519,18 @@ pub fn run() -> Result<(), ErrorCode> {
remote_path
} => {
if !Path::new(&remote_path).is_absolute() {
error!("The remote path of a repository must be absolute.");
tr_error!("The remote path of a repository must be absolute.");
return Err(ErrorCode::InvalidArgs);
}
let mut repo = checked!(
Repository::create(
repo_path,
Config {
bundle_size: bundle_size,
chunker: chunker,
compression: compression,
&Config {
bundle_size,
chunker,
compression,
encryption: None,
hash: hash
hash
},
remote_path
),
@ -449,9 +539,9 @@ pub fn run() -> Result<(), ErrorCode> {
);
if encryption {
let (public, secret) = Crypto::gen_keypair();
info!("Created the following key pair");
println!("public: {}", to_hex(&public[..]));
println!("secret: {}", to_hex(&secret[..]));
tr_info!("Created the following key pair");
tr_println!("public: {}", to_hex(&public[..]));
tr_println!("secret: {}", to_hex(&secret[..]));
repo.set_encryption(Some(&public));
checked!(
repo.register_key(public, secret),
@ -459,7 +549,7 @@ pub fn run() -> Result<(), ErrorCode> {
ErrorCode::AddKey
);
checked!(repo.save_config(), "save config", ErrorCode::SaveConfig);
warn!(
tr_warn!(
"Please store this key pair in a secure location before using the repository"
);
println!();
@ -478,13 +568,13 @@ pub fn run() -> Result<(), ErrorCode> {
no_default_excludes,
tar
} => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, true));
if repo.has_backup(&backup_name) {
error!("A backup with that name already exists");
tr_error!("A backup with that name already exists");
return Err(ErrorCode::BackupAlreadyExists);
}
if src_path == "-" && !tar {
error!("Reading from stdin requires --tar");
tr_error!("Reading from stdin requires --tar");
return Err(ErrorCode::InvalidArgs);
}
let mut reference_backup = None;
@ -500,9 +590,9 @@ pub fn run() -> Result<(), ErrorCode> {
reference_backup = try!(find_reference_backup(&repo, &src_path));
}
if let Some(&(ref name, _)) = reference_backup.as_ref() {
info!("Using backup {} as reference", name);
tr_info!("Using backup {} as reference", name);
} else {
info!("No reference backup found, doing a full scan instead");
tr_info!("No reference backup found, doing a full scan instead");
}
}
let reference_backup = reference_backup.map(|(_, backup)| backup);
@ -559,8 +649,8 @@ pub fn run() -> Result<(), ErrorCode> {
))
};
let options = BackupOptions {
same_device: same_device,
excludes: excludes
same_device,
excludes
};
let result = if tar {
repo.import_tarfile(&src_path)
@ -569,15 +659,15 @@ pub fn run() -> Result<(), ErrorCode> {
};
let backup = match result {
Ok(backup) => {
info!("Backup finished");
tr_info!("Backup finished");
backup
}
Err(RepositoryError::Backup(BackupError::FailedPaths(backup, _failed_paths))) => {
warn!("Some files are missing from the backup");
tr_warn!("Some files are missing from the backup");
backup
}
Err(err) => {
error!("Backup failed: {}", err);
tr_error!("Backup failed: {}", err);
return Err(ErrorCode::BackupRun);
}
};
@ -595,21 +685,9 @@ pub fn run() -> Result<(), ErrorCode> {
dst_path,
tar
} => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, true));
let backup = try!(get_backup(&repo, &backup_name));
let inode = if let Some(inode) = inode {
checked!(
repo.get_backup_inode(&backup, &inode),
"load subpath inode",
ErrorCode::LoadInode
)
} else {
checked!(
repo.get_inode(&backup.root),
"load root inode",
ErrorCode::LoadInode
)
};
let inode = try!(get_inode(&mut repo, &backup, inode.as_ref()));
if tar {
checked!(
repo.export_tarfile(&backup, inode, &dst_path),
@ -623,7 +701,7 @@ pub fn run() -> Result<(), ErrorCode> {
ErrorCode::RestoreRun
);
}
info!("Restore finished");
tr_info!("Restore finished");
}
Arguments::Copy {
repo_path_src,
@ -632,12 +710,12 @@ pub fn run() -> Result<(), ErrorCode> {
backup_name_dst
} => {
if repo_path_src != repo_path_dst {
error!("Can only run copy on same repository");
tr_error!("Can only run copy on same repository");
return Err(ErrorCode::InvalidArgs);
}
let mut repo = try!(open_repository(&repo_path_src));
let mut repo = try!(open_repository(&repo_path_src, false));
if repo.has_backup(&backup_name_dst) {
error!("A backup with that name already exists");
tr_error!("A backup with that name already exists");
return Err(ErrorCode::BackupAlreadyExists);
}
let backup = try!(get_backup(&repo, &backup_name_src));
@ -653,7 +731,7 @@ pub fn run() -> Result<(), ErrorCode> {
inode,
force
} => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, true));
if let Some(inode) = inode {
let mut backup = try!(get_backup(&repo, &backup_name));
checked!(
@ -666,7 +744,7 @@ pub fn run() -> Result<(), ErrorCode> {
"save backup file",
ErrorCode::SaveBackup
);
info!("The backup subpath has been deleted, run vacuum to reclaim space");
tr_info!("The backup subpath has been deleted, run vacuum to reclaim space");
} else if repo.layout.backups_path().join(&backup_name).is_dir() {
let backups = checked!(
repo.get_backups(&backup_name),
@ -682,7 +760,7 @@ pub fn run() -> Result<(), ErrorCode> {
);
}
} else {
error!("Denying to remove multiple backups (use --force):");
tr_error!("Denying to remove multiple backups (use --force):");
for name in backups.keys() {
println!(" - {}/{}", backup_name, name);
}
@ -693,7 +771,7 @@ pub fn run() -> Result<(), ErrorCode> {
"delete backup",
ErrorCode::RemoveRun
);
info!("The backup has been deleted, run vacuum to reclaim space");
tr_info!("The backup has been deleted, run vacuum to reclaim space");
}
}
Arguments::Prune {
@ -705,9 +783,9 @@ pub fn run() -> Result<(), ErrorCode> {
yearly,
force
} => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, true));
if daily + weekly + monthly + yearly == 0 {
error!("This would remove all those backups");
tr_error!("This would remove all those backups");
return Err(ErrorCode::UnsafeArgs);
}
checked!(
@ -716,7 +794,7 @@ pub fn run() -> Result<(), ErrorCode> {
ErrorCode::PruneRun
);
if !force {
info!("Run with --force to actually execute this command");
tr_info!("Run with --force to actually execute this command");
}
}
Arguments::Vacuum {
@ -725,7 +803,7 @@ pub fn run() -> Result<(), ErrorCode> {
force,
combine
} => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, true));
let info_before = repo.info();
checked!(
repo.vacuum(ratio, combine, force),
@ -733,10 +811,10 @@ pub fn run() -> Result<(), ErrorCode> {
ErrorCode::VacuumRun
);
if !force {
info!("Run with --force to actually execute this command");
tr_info!("Run with --force to actually execute this command");
} else {
let info_after = repo.info();
info!(
tr_info!(
"Reclaimed {}",
to_file_size(info_before.encoded_data_size - info_after.encoded_data_size)
);
@ -751,7 +829,7 @@ pub fn run() -> Result<(), ErrorCode> {
bundle_data,
repair
} => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, true));
checked!(
repo.check_repository(repair),
"check repository",
@ -790,14 +868,14 @@ pub fn run() -> Result<(), ErrorCode> {
)
}
repo.set_clean();
info!("Integrity verified")
tr_info!("Integrity verified")
}
Arguments::List {
repo_path,
backup_name,
inode
} => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, false));
let backup_map = if let Some(backup_name) = backup_name {
if repo.layout.backups_path().join(&backup_name).is_dir() {
repo.get_backups(&backup_name)
@ -830,11 +908,11 @@ pub fn run() -> Result<(), ErrorCode> {
let backup_map = match backup_map {
Ok(backup_map) => backup_map,
Err(RepositoryError::BackupFile(BackupFileError::PartialBackupsList(backup_map, _failed))) => {
warn!("Some backups could not be read, ignoring them");
tr_warn!("Some backups could not be read, ignoring them");
backup_map
}
Err(err) => {
error!("Failed to load backup files: {}", err);
tr_error!("Failed to load backup files: {}", err);
return Err(ErrorCode::LoadBackup);
}
};
@ -845,7 +923,7 @@ pub fn run() -> Result<(), ErrorCode> {
backup_name,
inode
} => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, false));
if let Some(backup_name) = backup_name {
let backup = try!(get_backup(&repo, &backup_name));
if let Some(inode) = inode {
@ -862,13 +940,35 @@ pub fn run() -> Result<(), ErrorCode> {
print_repoinfo(&repo.info());
}
}
Arguments::Statistics {
repo_path
} => {
let mut repo = try!(open_repository(&repo_path, false));
print_repostats(&repo.statistics());
}
Arguments::Duplicates {
repo_path,
backup_name,
inode,
min_size
} => {
let mut repo = try!(open_repository(&repo_path, true));
let backup = try!(get_backup(&repo, &backup_name));
let inode = try!(get_inode(&mut repo, &backup, inode.as_ref()));
let dups = checked!(
repo.find_duplicates(&inode, min_size),
"find duplicates",
ErrorCode::DuplicatesRun
);
print_duplicates(dups);
}
Arguments::Mount {
repo_path,
backup_name,
inode,
mount_point
} => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, true));
let fs = if let Some(backup_name) = backup_name {
if repo.layout.backups_path().join(&backup_name).is_dir() {
checked!(
@ -904,8 +1004,8 @@ pub fn run() -> Result<(), ErrorCode> {
ErrorCode::FuseMount
)
};
info!("Mounting the filesystem...");
info!(
tr_info!("Mounting the filesystem...");
tr_info!(
"Please unmount the filesystem via 'fusermount -u {}' when done.",
mount_point
);
@ -916,7 +1016,7 @@ pub fn run() -> Result<(), ErrorCode> {
);
}
Arguments::Analyze { repo_path } => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, true));
print_analysis(&checked!(
repo.analyze_usage(),
"analyze repository",
@ -924,7 +1024,7 @@ pub fn run() -> Result<(), ErrorCode> {
));
}
Arguments::BundleList { repo_path } => {
let repo = try!(open_repository(&repo_path));
let repo = try!(open_repository(&repo_path, true));
for bundle in repo.list_bundles() {
print_bundle_one_line(bundle);
}
@ -933,11 +1033,11 @@ pub fn run() -> Result<(), ErrorCode> {
repo_path,
bundle_id
} => {
let repo = try!(open_repository(&repo_path));
let repo = try!(open_repository(&repo_path, true));
if let Some(bundle) = repo.get_bundle(&bundle_id) {
print_bundle(bundle);
} else {
error!("No such bundle");
tr_error!("No such bundle");
return Err(ErrorCode::LoadBundle);
}
}
@ -951,10 +1051,10 @@ pub fn run() -> Result<(), ErrorCode> {
"import repository",
ErrorCode::ImportRun
);
info!("Import finished");
tr_info!("Import finished");
}
Arguments::Versions { repo_path, path } => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, true));
let mut found = false;
for (name, mut inode) in
checked!(
@ -968,7 +1068,7 @@ pub fn run() -> Result<(), ErrorCode> {
found = true;
}
if !found {
info!("No versions of that file were found.");
tr_info!("No versions of that file were found.");
}
}
Arguments::Diff {
@ -980,10 +1080,10 @@ pub fn run() -> Result<(), ErrorCode> {
inode_new
} => {
if repo_path_old != repo_path_new {
error!("Can only run diff on same repository");
tr_error!("Can only run diff on same repository");
return Err(ErrorCode::InvalidArgs);
}
let mut repo = try!(open_repository(&repo_path_old));
let mut repo = try!(open_repository(&repo_path_old, true));
let backup_old = try!(get_backup(&repo, &backup_name_old));
let backup_new = try!(get_backup(&repo, &backup_name_new));
let inode1 =
@ -1015,7 +1115,7 @@ pub fn run() -> Result<(), ErrorCode> {
);
}
if diffs.is_empty() {
info!("No differences found");
tr_info!("No differences found");
}
}
Arguments::Config {
@ -1026,14 +1126,14 @@ pub fn run() -> Result<(), ErrorCode> {
encryption,
hash
} => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, false));
let mut changed = false;
if let Some(bundle_size) = bundle_size {
repo.config.bundle_size = bundle_size;
changed = true;
}
if let Some(chunker) = chunker {
warn!(
tr_warn!(
"Changing the chunker makes it impossible to use existing data for deduplication"
);
repo.config.chunker = chunker;
@ -1048,7 +1148,7 @@ pub fn run() -> Result<(), ErrorCode> {
changed = true;
}
if let Some(hash) = hash {
warn!(
tr_warn!(
"Changing the hash makes it impossible to use existing data for deduplication"
);
repo.config.hash = hash;
@ -1056,7 +1156,7 @@ pub fn run() -> Result<(), ErrorCode> {
}
if changed {
checked!(repo.save_config(), "save config", ErrorCode::SaveConfig);
info!("The configuration has been updated.");
tr_info!("The configuration has been updated.");
} else {
print_config(&repo.config);
}
@ -1066,9 +1166,9 @@ pub fn run() -> Result<(), ErrorCode> {
None => Crypto::gen_keypair(),
Some(ref password) => Crypto::keypair_from_password(password),
};
info!("Created the following key pair");
println!("public: {}", to_hex(&public[..]));
println!("secret: {}", to_hex(&secret[..]));
tr_info!("Created the following key pair");
tr_println!("public: {}", to_hex(&public[..]));
tr_println!("secret: {}", to_hex(&secret[..]));
if let Some(file) = file {
checked!(
Crypto::save_keypair_to_file(&public, &secret, file),
@ -1083,7 +1183,7 @@ pub fn run() -> Result<(), ErrorCode> {
password,
file
} => {
let mut repo = try!(open_repository(&repo_path));
let mut repo = try!(open_repository(&repo_path, false));
let (public, secret) = if let Some(file) = file {
checked!(
Crypto::load_keypair_from_file(file),
@ -1091,13 +1191,13 @@ pub fn run() -> Result<(), ErrorCode> {
ErrorCode::LoadKey
)
} else {
info!("Created the following key pair");
tr_info!("Created the following key pair");
let (public, secret) = match password {
None => Crypto::gen_keypair(),
Some(ref password) => Crypto::keypair_from_password(password),
};
println!("public: {}", to_hex(&public[..]));
println!("secret: {}", to_hex(&secret[..]));
tr_println!("public: {}", to_hex(&public[..]));
tr_println!("secret: {}", to_hex(&secret[..]));
(public, secret)
};
checked!(
@ -1108,7 +1208,7 @@ pub fn run() -> Result<(), ErrorCode> {
if set_default {
repo.set_encryption(Some(&public));
checked!(repo.save_config(), "save config", ErrorCode::SaveConfig);
warn!(
tr_warn!(
"Please store this key pair in a secure location before using the repository"
);
}


@ -1,6 +1,3 @@
extern crate mmap;
#[macro_use] extern crate quick_error;
use std::path::Path;
use std::fs::{File, OpenOptions};
use std::mem;
@ -11,6 +8,7 @@ use std::os::unix::io::AsRawFd;
use mmap::{MemoryMap, MapOption, MapError};
use ::prelude::*;
pub const MAX_USAGE: f64 = 0.9;
pub const MIN_USAGE: f64 = 0.35;
@ -23,30 +21,30 @@ quick_error!{
Io(err: io::Error) {
from()
cause(err)
description("Failed to open index file")
display("Index error: failed to open the index file\n\tcaused by: {}", err)
description(tr!("Failed to open index file"))
display("{}", tr_format!("Index error: failed to open the index file\n\tcaused by: {}", err))
}
Mmap(err: MapError) {
from()
cause(err)
description("Failed to memory-map the index file")
display("Index error: failed to memory-map the index file\n\tcaused by: {}", err)
description(tr!("Failed to memory-map the index file"))
display("{}", tr_format!("Index error: failed to memory-map the index file\n\tcaused by: {}", err))
}
WrongMagic {
description("Wrong header")
display("Index error: file has the wrong magic header")
description(tr!("Wrong header"))
display("{}", tr!("Index error: file has the wrong magic header"))
}
UnsupportedVersion(version: u8) {
description("Unsupported version")
display("Index error: index file has unsupported version: {}", version)
description(tr!("Unsupported version"))
display("{}", tr_format!("Index error: index file has unsupported version: {}", version))
}
WrongPosition(should: usize, is: LocateResult) {
description("Key at wrong position")
display("Index error: key has wrong position, expected at: {}, but is at: {:?}", should, is)
description(tr!("Key at wrong position"))
display("{}", tr_format!("Index error: key has wrong position, expected at: {}, but is at: {:?}", should, is))
}
WrongEntryCount(header: usize, actual: usize) {
description("Wrong entry count")
display("Index error: index has wrong entry count, expected {}, but is {}", header, actual)
description(tr!("Wrong entry count"))
display("{}", tr_format!("Index error: index has wrong entry count, expected {}, but is {}", header, actual))
}
}
}
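The pattern running through these error definitions is mechanical: each description(...) literal is wrapped in tr!(...) and each display(...) becomes display("{}", tr_format!(...)), pushing the string through a message catalog before formatting. The real macros live in the translation module added in this changeset and lean on the runtime_fmt crate to format translated strings at runtime; the following is only a rough, catalog-less sketch of their shape, with a hypothetical lookup function standing in for the catalog:

// Hypothetical stand-ins for tr!/tr_format!; the real macros consult a
// gettext-style catalog instead of this identity lookup, and format the
// translated string at runtime via runtime_fmt.
fn lookup(msg: &'static str) -> &'static str {
    msg // a real implementation would return the translated message
}

macro_rules! tr {
    ($msg:expr) => { lookup($msg) };
}

macro_rules! tr_format {
    // Sketch only: format! needs a literal, so this skips the lookup that
    // the real runtime_fmt-based macro performs on the format string.
    ($fmt:expr $(, $arg:expr)*) => { format!($fmt $(, $arg)*) };
}

fn main() {
    println!("{}", tr!("Wrong header"));
    println!("{}", tr_format!("Index error: index file has unsupported version: {}", 3));
}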
@ -61,35 +59,66 @@ pub struct Header {
}
pub trait Key: Clone + Eq + Copy + Default {
pub trait Key: Eq + Copy + Default {
fn hash(&self) -> u64;
fn is_used(&self) -> bool;
fn clear(&mut self);
}
pub trait Value: Clone + Copy + Default {}
pub trait Value: Copy + Default {}
#[repr(packed)]
#[derive(Clone, Default)]
#[derive(Default)]
pub struct Entry<K, V> {
pub key: K,
pub data: V
key: K,
data: V
}
impl<K: Key, V> Entry<K, V> {
#[inline]
fn is_used(&self) -> bool {
self.key.is_used()
unsafe { self.key.is_used() }
}
#[inline]
fn clear(&mut self) {
self.key.clear()
unsafe { self.key.clear() }
}
#[inline]
fn get(&self) -> (&K, &V) {
unsafe { (&self.key, &self.data) }
}
#[inline]
fn get_mut(&mut self) -> (&K, &mut V) {
unsafe { (&self.key, &mut self.data) }
}
#[inline]
fn get_key(&self) -> &K {
unsafe { &self.key }
}
#[inline]
fn get_mut_key(&mut self) -> &mut K {
unsafe { &mut self.key }
}
#[inline]
fn get_data(&self) -> &V {
unsafe { &self.data }
}
#[inline]
fn get_mut_data(&mut self) -> &mut V {
unsafe { &mut self.data }
}
}
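These accessors wrap plain field access in unsafe blocks, which only makes sense together with the #[repr(packed)] attribute on Entry above: packing removes padding, so fields can sit at unaligned addresses, and borrowing such a field is what newer compilers flag (the safe_packed_borrows lint) outside unsafe code. A minimal sketch of the distinction:

// Minimal sketch: in a packed struct, by-value reads stay safe, but
// references to fields may be unaligned and therefore need unsafe.
#[repr(packed)]
struct P {
    flag: u8,
    value: u32 // starts at offset 1, i.e. unaligned for u32
}

fn main() {
    let p = P { flag: 1, value: 42 };
    let v = p.value; // safe: copies the field out with an unaligned load
    // let r = &p.value; // flagged: a &u32 must never be unaligned
    println!("{}", v);
}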
#[derive(Debug)]
pub enum LocateResult {
Found(usize), // Found the key at this position
@ -106,13 +135,14 @@ impl<'a, K: Key, V> Iterator for Iter<'a, K, V> {
while let Some((first, rest)) = self.0.split_first() {
self.0 = rest;
if first.is_used() {
return Some((&first.key, &first.data));
return Some(first.get())
}
}
None
}
}
#[allow(dead_code)]
pub struct IterMut<'a, K: 'static, V: 'static> (&'a mut [Entry<K, V>]);
impl<'a, K: Key, V> Iterator for IterMut<'a, K, V> {
@ -125,7 +155,7 @@ impl<'a, K: Key, V> Iterator for IterMut<'a, K, V> {
Some((first, rest)) => {
self.0 = rest;
if first.is_used() {
return Some((&first.key, &mut first.data))
return Some(first.get_mut())
}
}
}
@ -137,7 +167,7 @@ impl<'a, K: Key, V> Iterator for IterMut<'a, K, V> {
/// This method is unsafe as it potentially creates references to uninitialized memory
unsafe fn mmap_as_ref<K, V>(mmap: &MemoryMap, len: usize) -> (&'static mut Header, &'static mut [Entry<K, V>]) {
if mmap.len() < mem::size_of::<Header>() + len * mem::size_of::<Entry<K, V>>() {
panic!("Memory map too small");
tr_panic!("Memory map too small");
}
let header = &mut *(mmap.data() as *mut Header);
let ptr = mmap.data().offset(mem::size_of::<Header>() as isize) as *mut Entry<K, V>;
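The rest of this function is cut off by the diff, but presumably it finishes by turning ptr into a slice of len entries. A self-contained sketch of the same technique over a plain byte buffer (hypothetical Header fields, no mmap crate), to show how the header-plus-entries view is conjured out of raw memory:

use std::mem;
use std::slice;

#[repr(C)]
struct Header {
    entries: u64,
    capacity: u64 // abridged; the real header carries more fields
}

// Same trick as mmap_as_ref, over an ordinary byte buffer: unsafe because
// the bytes may be uninitialized and alignment is not checked here.
unsafe fn buf_as_ref<E>(buf: &mut [u8], len: usize) -> (&mut Header, &mut [E]) {
    if buf.len() < mem::size_of::<Header>() + len * mem::size_of::<E>() {
        panic!("Buffer too small");
    }
    let header = &mut *(buf.as_mut_ptr() as *mut Header);
    let ptr = buf.as_mut_ptr().offset(mem::size_of::<Header>() as isize) as *mut E;
    (header, slice::from_raw_parts_mut(ptr, len))
}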
@ -192,12 +222,12 @@ impl<K: Key, V: Value> Index<K, V> {
max_entries: (header.capacity as f64 * MAX_USAGE) as usize,
min_entries: (header.capacity as f64 * MIN_USAGE) as usize,
entries: header.entries as usize,
fd: fd,
mmap: mmap,
data: data,
header: header
fd,
mmap,
data,
header
};
debug_assert!(index.check().is_ok(), "Inconsistent after creation");
debug_assert!(index.check().is_ok(), tr!("Inconsistent after creation"));
Ok(index)
}
@ -238,6 +268,7 @@ impl<K: Key, V: Value> Index<K, V> {
self.max_entries = (capacity as f64 * MAX_USAGE) as usize;
}
#[allow(redundant_field_names)]
fn reinsert(&mut self, start: usize, end: usize) -> Result<(), IndexError> {
for pos in start..end {
let key;
@ -302,7 +333,7 @@ impl<K: Key, V: Value> Index<K, V> {
continue;
}
entries += 1;
match self.locate(&entry.key) {
match self.locate(entry.get_key()) {
LocateResult::Found(p) if p == pos => true,
found => return Err(IndexError::WrongPosition(pos, found))
};
@ -335,6 +366,11 @@ impl<K: Key, V: Value> Index<K, V> {
self.header.capacity = self.capacity as u64;
}
#[inline]
fn get_displacement(&self, entry: &Entry<K, V>, pos: usize) -> usize {
(pos + self.capacity - (entry.get_key().hash() as usize & self.mask)) & self.mask
}
/// Finds the position for this key
/// If the key is in the table, it will be the position of the key,
/// otherwise it will be the position where this key should be inserted
@ -346,10 +382,10 @@ impl<K: Key, V: Value> Index<K, V> {
if !entry.is_used() {
return LocateResult::Hole(pos);
}
if entry.key == *key {
if entry.get_key() == key {
return LocateResult::Found(pos);
}
let odist = (pos + self.capacity - (entry.key.hash() as usize & self.mask)) & self.mask;
let odist = self.get_displacement(entry, pos);
if dist > odist {
return LocateResult::Steal(pos);
}
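The Steal arm is the Robin Hood hashing rule: a probing key may take a slot from a resident entry that sits closer to its home, which keeps probe chains short. For intuition, a worked example of the displacement formula (assuming a power-of-two capacity, as the mask-based code implies):

// Displacement = how far an entry sits from its home slot, with wrap-around.
fn displacement(pos: usize, home: usize, capacity: usize) -> usize {
    (pos + capacity - home) & (capacity - 1)
}

fn main() {
    // Capacity 8: an entry hashed to slot 6 but stored at slot 1 wrapped
    // around the end and has displacement (1 + 8 - 6) & 7 = 3.
    assert_eq!(displacement(1, 6, 8), 3);
    assert_eq!(displacement(6, 6, 8), 0); // sitting in its home slot
}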
@ -372,12 +408,12 @@ impl<K: Key, V: Value> Index<K, V> {
// we found a hole, stop shifting here
break;
}
if entry.key.hash() as usize & self.mask == pos {
if (entry.get_key().hash() as usize & self.mask) == pos {
// we found an entry at the right position, stop shifting here
break;
}
}
self.data[last_pos] = self.data[pos].clone();
self.data.swap(last_pos, pos);
}
self.data[last_pos].clear();
}
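Two things happen in this hunk: the shift loop stops at a hole or at an entry already in its home slot, and the clone becomes a swap, sidestepping the Clone bound that was just dropped from Key and Value. A standalone sketch of the same backward-shift deletion, with Option<usize> (the value being the entry's home slot) standing in for Entry:

// Backward-shift deletion: after emptying slot `last`, pull each following
// entry one slot back until a hole or an entry at its home slot is reached.
fn backshift(table: &mut [Option<usize>], mut last: usize) {
    let cap = table.len();
    loop {
        let pos = (last + 1) % cap;
        match table[pos] {
            None => break, // found a hole, stop shifting here
            Some(home) if home == pos => break, // entry at the right position
            Some(_) => {
                table.swap(last, pos);
                last = pos;
            }
        }
    }
    table[last] = None; // clear the final hole
}

fn main() {
    // Entries at slots 2 and 3 both have home slot 1; slot 1 was just emptied.
    let mut t = [None, None, Some(1), Some(1), None];
    backshift(&mut t, 1);
    assert_eq!(t, [None, Some(1), Some(1), None, None]);
}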
@ -388,7 +424,7 @@ impl<K: Key, V: Value> Index<K, V> {
match self.locate(key) {
LocateResult::Found(pos) => {
let mut old = *data;
mem::swap(&mut old, &mut self.data[pos].data);
mem::swap(&mut old, self.data[pos].get_mut_data());
Ok(Some(old))
},
LocateResult::Hole(pos) => {
@ -415,8 +451,8 @@ impl<K: Key, V: Value> Index<K, V> {
cur_pos = (cur_pos + 1) & self.mask;
let entry = &mut self.data[cur_pos];
if entry.is_used() {
mem::swap(&mut stolen_key, &mut entry.key);
mem::swap(&mut stolen_data, &mut entry.data);
mem::swap(&mut stolen_key, entry.get_mut_key());
mem::swap(&mut stolen_data, entry.get_mut_data());
} else {
entry.key = stolen_key;
entry.data = stolen_data;
@ -431,7 +467,7 @@ impl<K: Key, V: Value> Index<K, V> {
#[inline]
pub fn contains(&self, key: &K) -> bool {
debug_assert!(self.check().is_ok(), "Inconsistent before get");
debug_assert!(self.check().is_ok(), tr!("Inconsistent before get"));
match self.locate(key) {
LocateResult::Found(_) => true,
_ => false
@ -440,7 +476,7 @@ impl<K: Key, V: Value> Index<K, V> {
#[inline]
pub fn pos(&self, key: &K) -> Option<usize> {
debug_assert!(self.check().is_ok(), "Inconsistent before get");
debug_assert!(self.check().is_ok(), tr!("Inconsistent before get"));
match self.locate(key) {
LocateResult::Found(pos) => Some(pos),
_ => None
@ -449,7 +485,7 @@ impl<K: Key, V: Value> Index<K, V> {
#[inline]
pub fn get(&self, key: &K) -> Option<V> {
debug_assert!(self.check().is_ok(), "Inconsistent before get");
debug_assert!(self.check().is_ok(), tr!("Inconsistent before get"));
match self.locate(key) {
LocateResult::Found(pos) => Some(self.data[pos].data),
_ => None
@ -457,11 +493,12 @@ impl<K: Key, V: Value> Index<K, V> {
}
#[inline]
#[allow(dead_code)]
pub fn modify<F>(&mut self, key: &K, mut f: F) -> bool where F: FnMut(&mut V) {
debug_assert!(self.check().is_ok(), "Inconsistent before get");
debug_assert!(self.check().is_ok(), tr!("Inconsistent before get"));
match self.locate(key) {
LocateResult::Found(pos) => {
f(&mut self.data[pos].data);
f(self.data[pos].get_mut_data());
true
},
_ => false
@ -487,7 +524,7 @@ impl<K: Key, V: Value> Index<K, V> {
while pos < self.capacity {
{
let entry = &mut self.data[pos];
if !entry.is_used() || f(&entry.key, &entry.data) {
if !entry.is_used() || f(entry.get_key(), entry.get_data()) {
pos += 1;
continue;
}
@ -507,6 +544,7 @@ impl<K: Key, V: Value> Index<K, V> {
}
#[inline]
#[allow(dead_code)]
pub fn iter_mut(&mut self) -> IterMut<K, V> {
IterMut(self.data)
}
@ -522,6 +560,7 @@ impl<K: Key, V: Value> Index<K, V> {
}
#[inline]
#[allow(dead_code)]
pub fn is_empty(&self) -> bool {
self.entries == 0
}
@ -538,4 +577,26 @@ impl<K: Key, V: Value> Index<K, V> {
}
self.entries = 0;
}
#[allow(dead_code)]
pub fn statistics(&self) -> IndexStatistics {
IndexStatistics {
count: self.entries,
capacity: self.capacity,
size: self.size(),
displacement: ValueStats::from_iter(|| self.data.iter().enumerate().filter(
|&(_, entry)| entry.is_used()).map(
|(index, entry)| self.get_displacement(entry, index) as f32))
}
}
}
#[derive(Debug)]
pub struct IndexStatistics {
pub count: usize,
pub capacity: usize,
pub size: usize,
pub displacement: ValueStats
}
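ValueStats comes from the util module, which this diff does not show; judging from its use here and in print_repostats above, it reduces a sequence of f32 values to the count/avg/stddev/max/count_xl figures being printed (the real from_iter takes a closure producing the iterator, presumably so the data can be traversed more than once). A hypothetical reconstruction over a slice:

// Hypothetical reconstruction of ValueStats; field names follow the uses
// in print_repostats (disp.avg, disp.stddev, disp.max, disp.count_xl).
#[derive(Debug, Default)]
pub struct ValueStats {
    pub count: usize,
    pub avg: f32,
    pub stddev: f32,
    pub max: f32,
    pub count_xl: usize // values more than two stddevs above the average
}

impl ValueStats {
    pub fn from_values(values: &[f32]) -> ValueStats {
        if values.is_empty() {
            return ValueStats::default();
        }
        let count = values.len();
        let avg = values.iter().sum::<f32>() / count as f32;
        let var = values.iter().map(|v| (v - avg) * (v - avg)).sum::<f32>() / count as f32;
        let stddev = var.sqrt();
        ValueStats {
            count,
            avg,
            stddev,
            max: values.iter().cloned().fold(0.0, f32::max),
            count_xl: values.iter().filter(|&&v| v > avg + 2.0 * stddev).count()
        }
    }
}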


@ -36,9 +36,12 @@ extern crate pbr;
extern crate users;
extern crate libc;
extern crate tar;
extern crate index;
extern crate chunking;
#[macro_use]
extern crate runtime_fmt;
extern crate locale_config;
extern crate mmap;
#[macro_use] mod translation;
pub mod util;
mod bundledb;
mod repository;
@ -46,6 +49,8 @@ mod cli;
mod prelude;
mod mount;
mod chunker;
mod chunking;
mod index;
use std::process::exit;


@ -113,8 +113,8 @@ impl FuseInode {
kind: convert_file_type(self.inode.file_type),
perm: self.inode.mode as u16,
nlink: 1,
uid: uid,
gid: gid,
uid,
gid,
rdev: self.inode.device.map_or(
0,
|(major, minor)| (major << 8) + minor
@ -158,7 +158,7 @@ impl<'a> FuseFilesystem<'a> {
pub fn new(repository: &'a mut Repository) -> Result<Self, RepositoryError> {
Ok(FuseFilesystem {
next_id: 1,
repository: repository,
repository,
inodes: HashMap::new()
})
}
@ -222,7 +222,7 @@ impl<'a> FuseFilesystem<'a> {
) -> FuseInodeRef {
self.add_inode(
Inode {
name: name,
name,
file_type: FileType::Directory,
..Default::default()
},
@ -240,7 +240,7 @@ impl<'a> FuseFilesystem<'a> {
group_names: HashMap<u32, String>,
) -> FuseInodeRef {
let inode = FuseInode {
inode: inode,
inode,
num: self.next_id,
parent: parent.clone(),
chunks: None,
@ -260,7 +260,7 @@ impl<'a> FuseFilesystem<'a> {
}
pub fn mount<P: AsRef<Path>>(self, mountpoint: P) -> Result<(), RepositoryError> {
Ok(try!(fuse::mount(
try!(fuse::mount(
self,
&mountpoint,
&[
@ -269,7 +269,8 @@ impl<'a> FuseFilesystem<'a> {
OsStr::new("auto_cache"),
OsStr::new("readonly"),
]
)))
));
Ok(())
}
pub fn get_inode(&mut self, num: u64) -> Option<FuseInodeRef> {
@ -523,16 +524,16 @@ impl<'a> fuse::Filesystem for FuseFilesystem<'a> {
/// Read data
/// Read should send exactly the number of bytes requested except on EOF or error,
/// otherwise the rest of the data will be substituted with zeroes. An exception to
/// this is when the file has been opened in 'direct_io' mode, in which case the
/// this is when the file has been opened in ‘direct_io’ mode, in which case the
/// return value of the read system call will reflect the return value of this
/// operation. fh will contain the value set by the open method, or will be undefined
/// if the open method didn't set any value.
/// if the open method didn’t set any value.
fn read(
&mut self,
_req: &fuse::Request,
ino: u64,
_fh: u64,
mut offset: u64,
mut offset: i64,
mut size: u32,
reply: fuse::ReplyData,
) {
@ -551,8 +552,8 @@ impl<'a> fuse::Filesystem for FuseFilesystem<'a> {
if let Some(ref chunks) = inode.chunks {
let mut data = Vec::with_capacity(size as usize);
for &(hash, len) in chunks.iter() {
if len as u64 <= offset {
offset -= len as u64;
if i64::from(len) <= offset {
offset -= i64::from(len);
continue;
}
let chunk = match fuse_try!(self.repository.get_chunk(hash), reply) {
@ -581,7 +582,7 @@ impl<'a> fuse::Filesystem for FuseFilesystem<'a> {
_req: &fuse::Request,
_ino: u64,
_fh: u64,
_offset: u64,
_offset: i64,
_data: &[u8],
_flags: u32,
reply: fuse::ReplyWrite,
@ -607,7 +608,7 @@ impl<'a> fuse::Filesystem for FuseFilesystem<'a> {
/// call there will be exactly one release call. The filesystem may reply with an
/// error, but error values are not returned to close() or munmap() which triggered
/// the release. fh will contain the value set by the open method, or will be undefined
/// if the open method didn't set any value. flags will contain the same flags as for
/// if the open method didn’t set any value.
/// open.
fn release(
&mut self,
@ -652,7 +653,7 @@ impl<'a> fuse::Filesystem for FuseFilesystem<'a> {
_req: &fuse::Request,
ino: u64,
_fh: u64,
offset: u64,
offset: i64,
mut reply: fuse::ReplyDirectory,
) {
let dir = inode!(self, ino, reply);
@ -662,7 +663,7 @@ impl<'a> fuse::Filesystem for FuseFilesystem<'a> {
if i < offset as usize {
continue;
}
if reply.add(num, i as u64 + 1, file_type, &Path::new(&name)) {
if reply.add(num, i as i64 + 1, file_type, &Path::new(&name)) {
break;
}
}
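The read path above walks the backup's chunk list sequentially: chunks lying wholly before the requested offset are skipped (the i64 arithmetic mirrors the new fuse API types), and the first and last overlapping chunks are sliced. The same walk over in-memory chunks, as a self-contained sketch:

// Minimal sketch of the offset/size walk in read() above, over in-memory
// chunks instead of repository lookups.
fn read_range(chunks: &[Vec<u8>], mut offset: i64, mut size: u32) -> Vec<u8> {
    let mut data = Vec::with_capacity(size as usize);
    for chunk in chunks {
        let len = chunk.len() as u32;
        if i64::from(len) <= offset {
            offset -= i64::from(len); // chunk lies entirely before the offset
            continue;
        }
        let start = offset as usize;
        let end = chunk.len().min(start + size as usize);
        data.extend_from_slice(&chunk[start..end]);
        size -= (end - start) as u32;
        offset = 0;
        if size == 0 {
            break;
        }
    }
    data
}

fn main() {
    let chunks = vec![b"hello ".to_vec(), b"world".to_vec()];
    assert_eq!(read_range(&chunks, 4, 4), b"o wo".to_vec());
}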


@ -1,12 +1,14 @@
pub use util::*;
pub use bundledb::{BundleReader, BundleMode, BundleWriter, BundleInfo, BundleId, BundleDbError,
BundleDb, BundleWriterError, StoredBundle};
BundleDb, BundleWriterError, StoredBundle, BundleStatistics};
pub use chunker::{ChunkerType, Chunker, ChunkerStatus, ChunkerError};
pub use repository::{Repository, Backup, Config, RepositoryError, RepositoryInfo, Inode, FileType,
IntegrityError, BackupFileError, BackupError, BackupOptions, BundleAnalysis,
FileData, DiffType, InodeError, RepositoryLayout, Location};
pub use index::{Index, IndexError};
FileData, DiffType, InodeError, RepositoryLayout, Location,
RepositoryStatistics};
pub use index::{Index, IndexError, IndexStatistics};
pub use mount::FuseFilesystem;
pub use translation::CowStr;
pub use serde::{Serialize, Deserialize};


@ -15,12 +15,12 @@ quick_error!{
#[allow(unknown_lints,large_enum_variant)]
pub enum BackupError {
FailedPaths(backup: Backup, failed: Vec<PathBuf>) {
description("Some paths could not be backed up")
display("Backup error: some paths could not be backed up")
description(tr!("Some paths could not be backed up"))
display("{}", tr_format!("Backup error: some paths could not be backed up"))
}
RemoveRoot {
description("The root of a backup can not be removed")
display("Backup error: the root of a backup can not be removed")
description(tr!("The root of a backup can not be removed"))
display("{}", tr_format!("Backup error: the root of a backup can not be removed"))
}
}
}
@ -73,11 +73,12 @@ impl Repository {
try!(self.write_mode());
let path = self.layout.backup_path(name);
try!(fs::create_dir_all(path.parent().unwrap()));
Ok(try!(backup.save_to(
try!(backup.save_to(
&self.crypto.lock().unwrap(),
self.config.encryption.clone(),
path
)))
));
Ok(())
}
pub fn delete_backup(&mut self, name: &str) -> Result<(), RepositoryError> {
@ -109,7 +110,7 @@ impl Repository {
Ok(backup_map) => backup_map,
Err(RepositoryError::BackupFile(BackupFileError::PartialBackupsList(backup_map,
_failed))) => {
warn!("Some backups could not be read, ignoring them");
tr_warn!("Some backups could not be read, ignoring them");
backup_map
}
Err(err) => return Err(err),
@ -238,7 +239,7 @@ impl Repository {
user.name().to_string()
);
} else {
warn!("Failed to retrieve name of user {}", inode.user);
tr_warn!("Failed to retrieve name of user {}", inode.user);
}
}
if !backup.group_names.contains_key(&inode.group) {
@ -248,7 +249,7 @@ impl Repository {
group.name().to_string()
);
} else {
warn!("Failed to retrieve name of group {}", inode.group);
tr_warn!("Failed to retrieve name of group {}", inode.group);
}
}
let mut meta_size = 0;
@ -298,7 +299,7 @@ impl Repository {
let chunks = try!(self.put_inode(&child_inode));
inode.cum_size += child_inode.cum_size;
for &(_, len) in chunks.iter() {
meta_size += len as u64;
meta_size += u64::from(len);
}
inode.cum_dirs += child_inode.cum_dirs;
inode.cum_files += child_inode.cum_files;
@ -309,7 +310,7 @@ impl Repository {
inode.cum_files = 1;
if let Some(FileData::ChunkedIndirect(ref chunks)) = inode.data {
for &(_, len) in chunks.iter() {
meta_size += len as u64;
meta_size += u64::from(len);
}
}
}
@ -357,7 +358,7 @@ impl Repository {
backup.timestamp = start.timestamp();
backup.total_data_size = root_inode.cum_size;
for &(_, len) in backup.root.iter() {
backup.total_data_size += len as u64;
backup.total_data_size += u64::from(len);
}
backup.file_count = root_inode.cum_files;
backup.dir_count = root_inode.cum_dirs;
@ -474,6 +475,7 @@ impl Repository {
Ok(versions)
}
#[allow(needless_pass_by_value)]
fn find_differences_recurse(
&mut self,
inode1: &Inode,
@ -540,4 +542,49 @@ impl Repository {
));
Ok(diffs)
}
fn count_sizes_recursive(&mut self, inode: &Inode, sizes: &mut HashMap<u64, usize>, min_size: u64) -> Result<(), RepositoryError> {
if inode.size >= min_size {
*sizes.entry(inode.size).or_insert(0) += 1;
}
if let Some(ref children) = inode.children {
for chunks in children.values() {
let ch = try!(self.get_inode(chunks));
try!(self.count_sizes_recursive(&ch, sizes, min_size));
}
}
Ok(())
}
fn find_duplicates_recursive(&mut self, inode: &Inode, path: &Path, sizes: &HashMap<u64, usize>, hashes: &mut HashMap<Hash, (Vec<PathBuf>, u64)>) -> Result<(), RepositoryError> {
let path = path.join(&inode.name);
if sizes.get(&inode.size).cloned().unwrap_or(0) > 1 {
if let Some(ref data) = inode.data {
let chunk_data = try!(msgpack::encode(data).map_err(InodeError::from));
let hash = HashMethod::Blake2.hash(&chunk_data);
hashes.entry(hash).or_insert((Vec::new(), inode.size)).0.push(path.clone());
}
}
if let Some(ref children) = inode.children {
for chunks in children.values() {
let ch = try!(self.get_inode(chunks));
try!(self.find_duplicates_recursive(&ch, &path, sizes, hashes));
}
}
Ok(())
}
pub fn find_duplicates(&mut self, inode: &Inode, min_size: u64) -> Result<Vec<(Vec<PathBuf>, u64)>, RepositoryError> {
let mut sizes = HashMap::new();
try!(self.count_sizes_recursive(inode, &mut sizes, min_size));
let mut hashes = HashMap::new();
if let Some(ref children) = inode.children {
for chunks in children.values() {
let ch = try!(self.get_inode(chunks));
try!(self.find_duplicates_recursive(&ch, Path::new(""), &sizes, &mut hashes));
}
}
let dups = hashes.into_iter().map(|(_,v)| v).filter(|&(ref v, _)| v.len() > 1).collect();
Ok(dups)
}
}
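find_duplicates is a two-pass scheme: the first pass only tallies file sizes (at or above min_size), and the second hashes an inode's data only when its size occurs more than once, since a file with a unique size cannot have a duplicate; groups with a single path are filtered out at the end. A standalone sketch of the same idea over an ordinary directory tree, where DefaultHasher over the file contents stands in for Blake2 over the msgpack-encoded chunk list (it is not collision-resistant, so this is illustration only):

use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::Hasher;
use std::io::{self, Read};
use std::path::{Path, PathBuf};

// Pass 1: tally sizes; pass 2: hash only files whose size occurs twice or more.
fn find_dups(root: &Path, min_size: u64) -> io::Result<Vec<(Vec<PathBuf>, u64)>> {
    let mut sizes: HashMap<u64, usize> = HashMap::new();
    let mut files: Vec<(PathBuf, u64)> = Vec::new();
    try!(visit(root, &mut |path, size| {
        if size >= min_size {
            *sizes.entry(size).or_insert(0) += 1;
            files.push((path, size));
        }
    }));
    let mut hashes: HashMap<u64, (Vec<PathBuf>, u64)> = HashMap::new();
    for (path, size) in files {
        if sizes[&size] > 1 {
            let mut data = Vec::new(); // reads whole files; fine for a sketch
            try!(try!(fs::File::open(&path)).read_to_end(&mut data));
            let mut hasher = DefaultHasher::new();
            hasher.write(&data);
            hashes.entry(hasher.finish()).or_insert((Vec::new(), size)).0.push(path);
        }
    }
    Ok(hashes.into_iter().map(|(_, v)| v).filter(|&(ref v, _)| v.len() > 1).collect())
}

fn visit(dir: &Path, f: &mut FnMut(PathBuf, u64)) -> io::Result<()> {
    for entry in try!(fs::read_dir(dir)) {
        let entry = try!(entry);
        let meta = try!(entry.metadata());
        if meta.is_dir() {
            try!(visit(&entry.path(), f));
        } else {
            f(entry.path(), meta.len());
        }
    }
    Ok(())
}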


@ -15,49 +15,49 @@ quick_error!{
pub enum BackupFileError {
Read(err: io::Error, path: PathBuf) {
cause(err)
description("Failed to read backup")
display("Backup file error: failed to read backup file {:?}\n\tcaused by: {}", path, err)
description(tr!("Failed to read backup"))
display("{}", tr_format!("Backup file error: failed to read backup file {:?}\n\tcaused by: {}", path, err))
}
Write(err: io::Error, path: PathBuf) {
cause(err)
description("Failed to write backup")
display("Backup file error: failed to write backup file {:?}\n\tcaused by: {}", path, err)
description(tr!("Failed to write backup"))
display("{}", tr_format!("Backup file error: failed to write backup file {:?}\n\tcaused by: {}", path, err))
}
Decode(err: msgpack::DecodeError, path: PathBuf) {
cause(err)
context(path: &'a Path, err: msgpack::DecodeError) -> (err, path.to_path_buf())
description("Failed to decode backup")
display("Backup file error: failed to decode backup of {:?}\n\tcaused by: {}", path, err)
description(tr!("Failed to decode backup"))
display("{}", tr_format!("Backup file error: failed to decode backup of {:?}\n\tcaused by: {}", path, err))
}
Encode(err: msgpack::EncodeError, path: PathBuf) {
cause(err)
context(path: &'a Path, err: msgpack::EncodeError) -> (err, path.to_path_buf())
description("Failed to encode backup")
display("Backup file error: failed to encode backup of {:?}\n\tcaused by: {}", path, err)
description(tr!("Failed to encode backup"))
display("{}", tr_format!("Backup file error: failed to encode backup of {:?}\n\tcaused by: {}", path, err))
}
WrongHeader(path: PathBuf) {
description("Wrong header")
display("Backup file error: wrong header on backup {:?}", path)
description(tr!("Wrong header"))
display("{}", tr_format!("Backup file error: wrong header on backup {:?}", path))
}
UnsupportedVersion(path: PathBuf, version: u8) {
description("Wrong version")
display("Backup file error: unsupported version on backup {:?}: {}", path, version)
description(tr!("Wrong version"))
display("{}", tr_format!("Backup file error: unsupported version on backup {:?}: {}", path, version))
}
Decryption(err: EncryptionError, path: PathBuf) {
cause(err)
context(path: &'a Path, err: EncryptionError) -> (err, path.to_path_buf())
description("Decryption failed")
display("Backup file error: decryption failed on backup {:?}\n\tcaused by: {}", path, err)
description(tr!("Decryption failed"))
display("{}", tr_format!("Backup file error: decryption failed on backup {:?}\n\tcaused by: {}", path, err))
}
Encryption(err: EncryptionError) {
from()
cause(err)
description("Encryption failed")
display("Backup file error: encryption failed\n\tcaused by: {}", err)
description(tr!("Encryption failed"))
display("{}", tr_format!("Backup file error: encryption failed\n\tcaused by: {}", err))
}
PartialBackupsList(partial: HashMap<String, Backup>, failed: Vec<PathBuf>) {
description("Some backups could not be loaded")
display("Backup file error: some backups could not be loaded: {:?}", failed)
description(tr!("Some backups could not be loaded"))
display("{}", tr_format!("Backup file error: some backups could not be loaded: {:?}", failed))
}
}
}
@ -164,7 +164,7 @@ impl Backup {
try!(file.write_all(&[HEADER_VERSION]).map_err(|err| {
BackupFileError::Write(err, path.to_path_buf())
}));
let header = BackupHeader { encryption: encryption };
let header = BackupHeader { encryption };
try!(msgpack::encode_to_stream(&header, &mut file).context(path));
try!(file.write_all(&data).map_err(|err| {
BackupFileError::Write(err, path.to_path_buf())
@ -180,7 +180,7 @@ impl Backup {
let base_path = path.as_ref();
let path = path.as_ref();
if !path.exists() {
debug!("Backup root folder does not exist");
tr_debug!("Backup root folder does not exist");
return Ok(backups);
}
let mut paths = vec![path.to_path_buf()];


@ -16,7 +16,7 @@ pub struct ChunkReader<'a> {
impl<'a> ChunkReader<'a> {
pub fn new(repo: &'a mut Repository, chunks: ChunkList) -> Self {
ChunkReader {
repo: repo,
repo,
chunks: chunks.into_inner().into(),
data: vec![],
pos: 0
@ -25,7 +25,7 @@ impl<'a> ChunkReader<'a> {
}
impl<'a> Read for ChunkReader<'a> {
fn read(&mut self, mut buf: &mut [u8]) -> Result<usize, io::Error> {
fn read(&mut self, buf: &mut [u8]) -> Result<usize, io::Error> {
let mut bpos = 0;
loop {
if buf.len() == bpos {


@ -16,24 +16,24 @@ quick_error!{
Io(err: io::Error) {
from()
cause(err)
description("Failed to read/write bundle map")
description(tr!("Failed to read/write bundle map"))
}
Decode(err: msgpack::DecodeError) {
from()
cause(err)
description("Failed to decode bundle map")
description(tr!("Failed to decode bundle map"))
}
Encode(err: msgpack::EncodeError) {
from()
cause(err)
description("Failed to encode bundle map")
description(tr!("Failed to encode bundle map"))
}
WrongHeader {
description("Wrong header")
description(tr!("Wrong header"))
}
WrongVersion(version: u8) {
description("Wrong version")
display("Wrong version: {}", version)
description(tr!("Wrong version"))
display("{}", tr_format!("Wrong version: {}", version))
}
}
}


@ -16,22 +16,22 @@ quick_error!{
}
Parse(reason: &'static str) {
from()
description("Failed to parse config")
display("Failed to parse config: {}", reason)
description(tr!("Failed to parse config"))
display("{}", tr_format!("Failed to parse config: {}", reason))
}
Yaml(err: serde_yaml::Error) {
from()
cause(err)
description("Yaml format error")
display("Yaml format error: {}", err)
description(tr!("Yaml format error"))
display("{}", tr_format!("Yaml format error: {}", err))
}
}
}
impl HashMethod {
fn from_yaml(yaml: String) -> Result<Self, ConfigError> {
HashMethod::from(&yaml).map_err(ConfigError::Parse)
fn from_yaml(yaml: &str) -> Result<Self, ConfigError> {
HashMethod::from(yaml).map_err(ConfigError::Parse)
}
fn to_yaml(&self) -> String {
@ -61,7 +61,7 @@ serde_impl!(ChunkerYaml(String) {
});
impl ChunkerType {
fn from_yaml(yaml: ChunkerYaml) -> Result<Self, ConfigError> {
fn from_yaml(yaml: &ChunkerYaml) -> Result<Self, ConfigError> {
ChunkerType::from(&yaml.method, yaml.avg_size, yaml.seed).map_err(ConfigError::Parse)
}
@ -78,8 +78,8 @@ impl ChunkerType {
impl Compression {
#[inline]
fn from_yaml(yaml: String) -> Result<Self, ConfigError> {
Compression::from_string(&yaml).map_err(|_| ConfigError::Parse("Invalid codec"))
fn from_yaml(yaml: &str) -> Result<Self, ConfigError> {
Compression::from_string(yaml).map_err(|_| ConfigError::Parse(tr!("Invalid codec")))
}
#[inline]
@ -91,8 +91,8 @@ impl Compression {
impl EncryptionMethod {
#[inline]
fn from_yaml(yaml: String) -> Result<Self, ConfigError> {
EncryptionMethod::from_string(&yaml).map_err(|_| ConfigError::Parse("Invalid codec"))
fn from_yaml(yaml: &str) -> Result<Self, ConfigError> {
EncryptionMethod::from_string(yaml).map_err(|_| ConfigError::Parse(tr!("Invalid codec")))
}
#[inline]
@ -179,25 +179,25 @@ serde_impl!(Config(u64) {
impl Config {
fn from_yaml(yaml: ConfigYaml) -> Result<Self, ConfigError> {
let compression = if let Some(c) = yaml.compression {
Some(try!(Compression::from_yaml(c)))
Some(try!(Compression::from_yaml(&c)))
} else {
None
};
let encryption = if let Some(e) = yaml.encryption {
let method = try!(EncryptionMethod::from_yaml(e.method));
let method = try!(EncryptionMethod::from_yaml(&e.method));
let key = try!(parse_hex(&e.key).map_err(|_| {
ConfigError::Parse("Invalid public key")
ConfigError::Parse(tr!("Invalid public key"))
}));
Some((method, key.into()))
} else {
None
};
Ok(Config {
compression: compression,
encryption: encryption,
compression,
encryption,
bundle_size: yaml.bundle_size,
chunker: try!(ChunkerType::from_yaml(yaml.chunker)),
hash: try!(HashMethod::from_yaml(yaml.hash))
chunker: try!(ChunkerType::from_yaml(&yaml.chunker)),
hash: try!(HashMethod::from_yaml(&yaml.hash))
})
}


@ -15,95 +15,95 @@ quick_error!{
#[allow(unknown_lints,large_enum_variant)]
pub enum RepositoryError {
NoRemote {
description("Remote storage not found")
display("Repository error: The remote storage has not been found, may be it needs to be mounted?")
description(tr!("Remote storage not found"))
display("{}", tr_format!("Repository error: The remote storage has not been found, may be it needs to be mounted?"))
}
Index(err: IndexError) {
from()
cause(err)
description("Index error")
display("Repository error: index error\n\tcaused by: {}", err)
description(tr!("Index error"))
display("{}", tr_format!("Repository error: index error\n\tcaused by: {}", err))
}
BundleDb(err: BundleDbError) {
from()
cause(err)
description("Bundle error")
display("Repository error: bundle db error\n\tcaused by: {}", err)
description(tr!("Bundle error"))
display("{}", tr_format!("Repository error: bundle db error\n\tcaused by: {}", err))
}
BundleWriter(err: BundleWriterError) {
from()
cause(err)
description("Bundle write error")
display("Repository error: failed to write to new bundle\n\tcaused by: {}", err)
description(tr!("Bundle write error"))
display("{}", tr_format!("Repository error: failed to write to new bundle\n\tcaused by: {}", err))
}
BackupFile(err: BackupFileError) {
from()
cause(err)
description("Backup file error")
display("Repository error: backup file error\n\tcaused by: {}", err)
description(tr!("Backup file error"))
display("{}", tr_format!("Repository error: backup file error\n\tcaused by: {}", err))
}
Chunker(err: ChunkerError) {
from()
cause(err)
description("Chunker error")
display("Repository error: failed to chunk data\n\tcaused by: {}", err)
description(tr!("Chunker error"))
display("{}", tr_format!("Repository error: failed to chunk data\n\tcaused by: {}", err))
}
Config(err: ConfigError) {
from()
cause(err)
description("Configuration error")
display("Repository error: configuration error\n\tcaused by: {}", err)
description(tr!("Configuration error"))
display("{}", tr_format!("Repository error: configuration error\n\tcaused by: {}", err))
}
Inode(err: InodeError) {
from()
cause(err)
description("Inode error")
display("Repository error: inode error\n\tcaused by: {}", err)
description(tr!("Inode error"))
display("{}", tr_format!("Repository error: inode error\n\tcaused by: {}", err))
}
LoadKeys(err: EncryptionError) {
from()
cause(err)
description("Failed to load keys")
display("Repository error: failed to load keys\n\tcaused by: {}", err)
description(tr!("Failed to load keys"))
display("{}", tr_format!("Repository error: failed to load keys\n\tcaused by: {}", err))
}
BundleMap(err: BundleMapError) {
from()
cause(err)
description("Bundle map error")
display("Repository error: bundle map error\n\tcaused by: {}", err)
description(tr!("Bundle map error"))
display("{}", tr_format!("Repository error: bundle map error\n\tcaused by: {}", err))
}
Integrity(err: IntegrityError) {
from()
cause(err)
description("Integrity error")
display("Repository error: integrity error\n\tcaused by: {}", err)
description(tr!("Integrity error"))
display("{}", tr_format!("Repository error: integrity error\n\tcaused by: {}", err))
}
Dirty {
description("Dirty repository")
display("The repository is dirty, please run a check")
description(tr!("Dirty repository"))
display("{}", tr_format!("The repository is dirty, please run a check"))
}
Backup(err: BackupError) {
from()
cause(err)
description("Failed to create a backup")
display("Repository error: failed to create backup\n\tcaused by: {}", err)
description(tr!("Failed to create a backup"))
display("{}", tr_format!("Repository error: failed to create backup\n\tcaused by: {}", err))
}
Lock(err: LockError) {
from()
cause(err)
description("Failed to obtain lock")
display("Repository error: failed to obtain lock\n\tcaused by: {}", err)
description(tr!("Failed to obtain lock"))
display("{}", tr_format!("Repository error: failed to obtain lock\n\tcaused by: {}", err))
}
Io(err: io::Error) {
from()
cause(err)
description("IO error")
display("IO error: {}", err)
description(tr!("IO error"))
display("{}", tr_format!("IO error: {}", err))
}
NoSuchFileInBackup(backup: Backup, path: PathBuf) {
description("No such file in backup")
display("The backup does not contain the file {:?}", path)
description(tr!("No such file in backup"))
display("{}", tr_format!("The backup does not contain the file {:?}", path))
}
}
}


@ -39,6 +39,13 @@ pub struct RepositoryInfo {
}
#[derive(Debug)]
pub struct RepositoryStatistics {
pub index: IndexStatistics,
pub bundles: BundleStatistics
}
impl Repository {
fn mark_used(
&self,
@ -48,7 +55,8 @@ impl Repository {
let mut new = false;
for &(hash, len) in chunks {
if let Some(pos) = self.index.get(&hash) {
if let Some(bundle) = bundles.get_mut(&pos.bundle) {
let bundle = pos.bundle;
if let Some(bundle) = bundles.get_mut(&bundle) {
if !bundle.chunk_usage.get(pos.chunk as usize) {
new = true;
bundle.chunk_usage.set(pos.chunk as usize);
@ -136,9 +144,9 @@ impl Repository {
let chunk_count = bundles.iter().map(|b| b.chunk_count).sum();
RepositoryInfo {
bundle_count: bundles.len(),
chunk_count: chunk_count,
encoded_data_size: encoded_data_size,
raw_data_size: raw_data_size,
chunk_count,
encoded_data_size,
raw_data_size,
compression_ratio: encoded_data_size as f32 / raw_data_size as f32,
avg_chunk_size: raw_data_size as f32 / chunk_count as f32,
index_size: self.index.size(),
@ -146,4 +154,12 @@ impl Repository {
index_entries: self.index.len()
}
}
#[allow(dead_code)]
pub fn statistics(&self) -> RepositoryStatistics {
RepositoryStatistics {
index: self.index.statistics(),
bundles: self.bundles.statistics()
}
}
}


@ -12,36 +12,36 @@ quick_error!{
#[derive(Debug)]
pub enum IntegrityError {
MissingChunk(hash: Hash) {
description("Missing chunk")
display("Missing chunk: {}", hash)
description(tr!("Missing chunk"))
display("{}", tr_format!("Missing chunk: {}", hash))
}
MissingBundleId(id: u32) {
description("Missing bundle")
display("Missing bundle: {}", id)
description(tr!("Missing bundle"))
display("{}", tr_format!("Missing bundle: {}", id))
}
MissingBundle(id: BundleId) {
description("Missing bundle")
display("Missing bundle: {}", id)
description(tr!("Missing bundle"))
display("{}", tr_format!("Missing bundle: {}", id))
}
NoSuchChunk(bundle: BundleId, chunk: u32) {
description("No such chunk")
display("Bundle {} does not contain the chunk {}", bundle, chunk)
description(tr!("No such chunk"))
display("{}", tr_format!("Bundle {} does not contain the chunk {}", bundle, chunk))
}
RemoteBundlesNotInMap {
description("Remote bundles missing from map")
description(tr!("Remote bundles missing from map"))
}
MapContainsDuplicates {
description("Map contains duplicates")
description(tr!("Map contains duplicates"))
}
BrokenInode(path: PathBuf, err: Box<RepositoryError>) {
cause(err)
description("Broken inode")
display("Broken inode: {:?}\n\tcaused by: {}", path, err)
description(tr!("Broken inode"))
display("{}", tr_format!("Broken inode: {:?}\n\tcaused by: {}", path, err))
}
MissingInodeData(path: PathBuf, err: Box<RepositoryError>) {
cause(err)
description("Missing inode data")
display("Missing inode data in: {:?}\n\tcaused by: {}", path, err)
description(tr!("Missing inode data"))
display("{}", tr_format!("Missing inode data in: {:?}\n\tcaused by: {}", path, err))
}
}
}
@ -49,7 +49,7 @@ quick_error!{
impl Repository {
fn check_index_chunks(&self) -> Result<(), RepositoryError> {
let mut progress = ProgressBar::new(self.index.len() as u64);
progress.message("checking index: ");
progress.message(tr!("checking index: "));
progress.set_max_refresh_rate(Some(Duration::from_millis(100)));
for (count, (_hash, location)) in self.index.iter().enumerate() {
// Lookup bundle id from map
@ -58,12 +58,12 @@ impl Repository {
let bundle = if let Some(bundle) = self.bundles.get_bundle_info(&bundle_id) {
bundle
} else {
progress.finish_print("checking index: done.");
progress.finish_print(tr!("checking index: done."));
return Err(IntegrityError::MissingBundle(bundle_id.clone()).into());
};
// Get chunk from bundle
if bundle.info.chunk_count <= location.chunk as usize {
progress.finish_print("checking index: done.");
progress.finish_print(tr!("checking index: done."));
return Err(
IntegrityError::NoSuchChunk(bundle_id.clone(), location.chunk).into()
);
@ -72,7 +72,7 @@ impl Repository {
progress.set(count as u64);
}
}
progress.finish_print("checking index: done.");
progress.finish_print(tr!("checking index: done."));
Ok(())
}
@ -108,10 +108,11 @@ impl Repository {
try!(self.check_chunks(checked, chunks, true));
}
Some(FileData::ChunkedIndirect(ref chunks)) => {
if try!(self.check_chunks(checked, chunks, true)) {
if try!(self.check_chunks(checked, chunks, false)) {
let chunk_data = try!(self.get_data(chunks));
let chunks = ChunkList::read_from(&chunk_data);
try!(self.check_chunks(checked, &chunks, true));
let chunks2 = ChunkList::read_from(&chunk_data);
try!(self.check_chunks(checked, &chunks2, true));
try!(self.check_chunks(checked, chunks, true));
}
}
}
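
The reordering above delays marking: the indirect chunks are now only flagged as checked after the chunk list they point to has been verified. The resulting logic, annotated (a sketch assuming the last `check_chunks` argument controls whether verified chunks are marked in the `checked` bitmap):

    Some(FileData::ChunkedIndirect(ref chunks)) => {
        // First verify the indirect chunks without marking them as checked.
        if try!(self.check_chunks(checked, chunks, false)) {
            // Load the indirect block and verify every chunk it references.
            let chunk_data = try!(self.get_data(chunks));
            let chunks2 = ChunkList::read_from(&chunk_data);
            try!(self.check_chunks(checked, &chunks2, true));
            // Only mark the indirect chunks once their contents are known good.
            try!(self.check_chunks(checked, chunks, true));
        }
    }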
@ -135,12 +136,12 @@ impl Repository {
// Mark the content chunks as used
if let Err(err) = self.check_inode_contents(&inode, checked) {
if repair {
warn!(
tr_warn!(
"Problem detected: data of {:?} is corrupt\n\tcaused by: {}",
path,
err
);
info!("Removing inode data");
tr_info!("Removing inode data");
inode.data = Some(FileData::Inline(vec![].into()));
inode.size = 0;
modified = true;
@ -160,12 +161,12 @@ impl Repository {
}
Err(err) => {
if repair {
warn!(
tr_warn!(
"Problem detected: inode {:?} is corrupt\n\tcaused by: {}",
path.join(name),
err
);
info!("Removing broken inode from backup");
tr_info!("Removing broken inode from backup");
removed.push(name.to_string());
modified = true;
} else {
@ -187,7 +188,7 @@ impl Repository {
}
fn evacuate_broken_backup(&self, name: &str) -> Result<(), RepositoryError> {
warn!(
tr_warn!(
"The backup {} was corrupted and needed to be modified.",
name
);
@ -202,7 +203,7 @@ impl Repository {
try!(fs::copy(&src, &dst));
try!(fs::remove_file(&src));
}
info!("The original backup was renamed to {:?}", dst);
tr_info!("The original backup was renamed to {:?}", dst);
Ok(())
}
@ -219,7 +220,7 @@ impl Repository {
} else {
None
};
info!("Checking backup...");
tr_info!("Checking backup...");
let mut checked = Bitmap::new(self.index.capacity());
match self.check_subtree(
Path::new("").to_path_buf(),
@ -237,7 +238,7 @@ impl Repository {
}
Err(err) => {
if repair {
warn!(
tr_warn!(
"The root of the backup {} has been corrupted\n\tcaused by: {}",
name,
err
@ -264,19 +265,19 @@ impl Repository {
} else {
None
};
info!("Checking inode...");
tr_info!("Checking inode...");
let mut checked = Bitmap::new(self.index.capacity());
let mut inodes = try!(self.get_backup_path(backup, path));
let mut inode = inodes.pop().unwrap();
let mut modified = false;
if let Err(err) = self.check_inode_contents(&inode, &mut checked) {
if repair {
warn!(
tr_warn!(
"Problem detected: data of {:?} is corrupt\n\tcaused by: {}",
path,
err
);
info!("Removing inode data");
tr_info!("Removing inode data");
inode.data = Some(FileData::Inline(vec![].into()));
inode.size = 0;
modified = true;
@ -297,12 +298,12 @@ impl Repository {
}
Err(err) => {
if repair {
warn!(
tr_warn!(
"Problem detected: inode {:?} is corrupt\n\tcaused by: {}",
path.join(name),
err
);
info!("Removing broken inode from backup");
tr_info!("Removing broken inode from backup");
removed.push(name.to_string());
modified = true;
} else {
@ -338,19 +339,19 @@ impl Repository {
} else {
None
};
info!("Checking backups...");
tr_info!("Checking backups...");
let mut checked = Bitmap::new(self.index.capacity());
let backup_map = match self.get_all_backups() {
Ok(backup_map) => backup_map,
Err(RepositoryError::BackupFile(BackupFileError::PartialBackupsList(backup_map,
_failed))) => {
warn!("Some backups could not be read, ignoring them");
tr_warn!("Some backups could not be read, ignoring them");
backup_map
}
Err(err) => return Err(err),
};
for (name, mut backup) in
ProgressIter::new("checking backups", backup_map.len(), backup_map.into_iter())
ProgressIter::new(tr!("checking backups"), backup_map.len(), backup_map.into_iter())
{
let path = format!("{}::", name);
match self.check_subtree(
@ -369,7 +370,7 @@ impl Repository {
}
Err(err) => {
if repair {
warn!(
tr_warn!(
"The root of the backup {} has been corrupted\n\tcaused by: {}",
name,
err
@ -385,12 +386,12 @@ impl Repository {
}
pub fn check_repository(&mut self, repair: bool) -> Result<(), RepositoryError> {
info!("Checking repository integrity...");
tr_info!("Checking repository integrity...");
let mut rebuild = false;
for (_id, bundle_id) in self.bundle_map.bundles() {
if self.bundles.get_bundle_info(&bundle_id).is_none() {
if repair {
warn!(
tr_warn!(
"Problem detected: bundle map contains unknown bundle {}",
bundle_id
);
@ -402,7 +403,7 @@ impl Repository {
}
if self.bundle_map.len() < self.bundles.len() {
if repair {
warn!("Problem detected: bundle map does not contain all remote bundles");
tr_warn!("Problem detected: bundle map does not contain all remote bundles");
rebuild = true;
} else {
return Err(IntegrityError::RemoteBundlesNotInMap.into());
@ -410,7 +411,7 @@ impl Repository {
}
if self.bundle_map.len() > self.bundles.len() {
if repair {
warn!("Problem detected: bundle map contains bundles multiple times");
tr_warn!("Problem detected: bundle map contains bundles multiple times");
rebuild = true;
} else {
return Err(IntegrityError::MapContainsDuplicates.into());
@ -424,7 +425,7 @@ impl Repository {
}
pub fn rebuild_bundle_map(&mut self) -> Result<(), RepositoryError> {
info!("Rebuilding bundle map from bundles");
tr_info!("Rebuilding bundle map from bundles");
self.bundle_map = BundleMap::create();
for bundle in self.bundles.list_bundles() {
let bundle_id = match bundle.mode {
@ -443,11 +444,11 @@ impl Repository {
}
pub fn rebuild_index(&mut self) -> Result<(), RepositoryError> {
info!("Rebuilding index from bundles");
tr_info!("Rebuilding index from bundles");
self.index.clear();
let mut bundles = self.bundle_map.bundles();
bundles.sort_by_key(|&(_, ref v)| v.clone());
for (num, id) in bundles {
for (num, id) in ProgressIter::new(tr!("Rebuilding index from bundles"), bundles.len(), bundles.into_iter()) {
let chunks = try!(self.bundles.get_chunk_list(&id));
for (i, (hash, _len)) in chunks.into_inner().into_iter().enumerate() {
try!(self.index.set(
@ -467,10 +468,10 @@ impl Repository {
if repair {
try!(self.write_mode());
}
info!("Checking index integrity...");
tr_info!("Checking index integrity...");
if let Err(err) = self.index.check() {
if repair {
warn!(
tr_warn!(
"Problem detected: index was corrupted\n\tcaused by: {}",
err
);
@ -479,16 +480,16 @@ impl Repository {
return Err(err.into());
}
}
info!("Checking index entries...");
tr_info!("Checking index entries...");
if let Err(err) = self.check_index_chunks() {
if repair {
warn!(
tr_warn!(
"Problem detected: index entries were inconsistent\n\tcaused by: {}",
err
);
return self.rebuild_index();
} else {
return Err(err.into());
return Err(err);
}
}
Ok(())
@ -499,10 +500,10 @@ impl Repository {
if repair {
try!(self.write_mode());
}
info!("Checking bundle integrity...");
tr_info!("Checking bundle integrity...");
if try!(self.bundles.check(full, repair)) {
// Some bundles got repaired
warn!("Some bundles have been rewritten, please remove the broken bundles manually.");
tr_warn!("Some bundles have been rewritten, please remove the broken bundles manually.");
try!(self.rebuild_bundle_map());
try!(self.rebuild_index());
}

View File

@ -19,44 +19,44 @@ quick_error!{
#[derive(Debug)]
pub enum InodeError {
UnsupportedFiletype(path: PathBuf) {
description("Unsupported file type")
display("Inode error: file {:?} has an unsupported type", path)
description(tr!("Unsupported file type"))
display("{}", tr_format!("Inode error: file {:?} has an unsupported type", path))
}
ReadMetadata(err: io::Error, path: PathBuf) {
cause(err)
description("Failed to obtain metadata for file")
display("Inode error: failed to obtain metadata for file {:?}\n\tcaused by: {}", path, err)
description(tr!("Failed to obtain metadata for file"))
display("{}", tr_format!("Inode error: failed to obtain metadata for file {:?}\n\tcaused by: {}", path, err))
}
ReadXattr(err: io::Error, path: PathBuf) {
cause(err)
description("Failed to obtain xattr for file")
display("Inode error: failed to obtain xattr for file {:?}\n\tcaused by: {}", path, err)
description(tr!("Failed to obtain xattr for file"))
display("{}", tr_format!("Inode error: failed to obtain xattr for file {:?}\n\tcaused by: {}", path, err))
}
ReadLinkTarget(err: io::Error, path: PathBuf) {
cause(err)
description("Failed to obtain link target for file")
display("Inode error: failed to obtain link target for file {:?}\n\tcaused by: {}", path, err)
description(tr!("Failed to obtain link target for file"))
display("{}", tr_format!("Inode error: failed to obtain link target for file {:?}\n\tcaused by: {}", path, err))
}
Create(err: io::Error, path: PathBuf) {
cause(err)
description("Failed to create entity")
display("Inode error: failed to create entity {:?}\n\tcaused by: {}", path, err)
description(tr!("Failed to create entity"))
display("{}", tr_format!("Inode error: failed to create entity {:?}\n\tcaused by: {}", path, err))
}
Integrity(reason: &'static str) {
description("Integrity error")
display("Inode error: inode integrity error: {}", reason)
description(tr!("Integrity error"))
display("{}", tr_format!("Inode error: inode integrity error: {}", reason))
}
Decode(err: msgpack::DecodeError) {
from()
cause(err)
description("Failed to decode metadata")
display("Inode error: failed to decode metadata\n\tcaused by: {}", err)
description(tr!("Failed to decode metadata"))
display("{}", tr_format!("Inode error: failed to decode metadata\n\tcaused by: {}", err))
}
Encode(err: msgpack::EncodeError) {
from()
cause(err)
description("Failed to encode metadata")
display("Inode error: failed to encode metadata\n\tcaused by: {}", err)
description(tr!("Failed to encode metadata"))
display("{}", tr_format!("Inode error: failed to encode metadata\n\tcaused by: {}", err))
}
}
}
@ -82,12 +82,12 @@ serde_impl!(FileType(u8) {
impl fmt::Display for FileType {
fn fmt(&self, format: &mut fmt::Formatter) -> Result<(), fmt::Error> {
match *self {
FileType::File => write!(format, "file"),
FileType::Directory => write!(format, "directory"),
FileType::Symlink => write!(format, "symlink"),
FileType::BlockDevice => write!(format, "block device"),
FileType::CharDevice => write!(format, "char device"),
FileType::NamedPipe => write!(format, "named pipe"),
FileType::File => write!(format, "{}", tr!("file")),
FileType::Directory => write!(format, "{}", tr!("directory")),
FileType::Symlink => write!(format, "{}", tr!("symlink")),
FileType::BlockDevice => write!(format, "{}", tr!("block device")),
FileType::CharDevice => write!(format, "{}", tr!("char device")),
FileType::NamedPipe => write!(format, "{}", tr!("named pipe")),
}
}
}
@ -249,13 +249,13 @@ impl Inode {
InodeError::Create(e, full_path.clone())
}));
} else {
return Err(InodeError::Integrity("Symlink without target"));
return Err(InodeError::Integrity(tr!("Symlink without target")));
}
}
FileType::NamedPipe => {
let name = try!(
ffi::CString::new(full_path.as_os_str().as_bytes())
.map_err(|_| InodeError::Integrity("Name contains nulls"))
.map_err(|_| InodeError::Integrity(tr!("Name contains nulls")))
);
let mode = self.mode | libc::S_IFIFO;
if unsafe { libc::mkfifo(name.as_ptr(), mode) } != 0 {
@ -268,7 +268,7 @@ impl Inode {
FileType::BlockDevice | FileType::CharDevice => {
let name = try!(
ffi::CString::new(full_path.as_os_str().as_bytes())
.map_err(|_| InodeError::Integrity("Name contains nulls"))
.map_err(|_| InodeError::Integrity(tr!("Name contains nulls")))
);
let mode = self.mode |
match self.file_type {
@ -279,7 +279,7 @@ impl Inode {
let device = if let Some((major, minor)) = self.device {
unsafe { libc::makedev(major, minor) }
} else {
return Err(InodeError::Integrity("Device without id"));
return Err(InodeError::Integrity(tr!("Device without id")));
};
if unsafe { libc::mknod(name.as_ptr(), mode, device) } != 0 {
return Err(InodeError::Create(
@ -291,21 +291,21 @@ impl Inode {
}
let time = FileTime::from_seconds_since_1970(self.timestamp as u64, 0);
if let Err(err) = filetime::set_file_times(&full_path, time, time) {
warn!("Failed to set file time on {:?}: {}", full_path, err);
tr_warn!("Failed to set file time on {:?}: {}", full_path, err);
}
if !self.xattrs.is_empty() {
if xattr::SUPPORTED_PLATFORM {
for (name, data) in &self.xattrs {
if let Err(err) = xattr::set(&full_path, name, data) {
warn!("Failed to set xattr {} on {:?}: {}", name, full_path, err);
tr_warn!("Failed to set xattr {} on {:?}: {}", name, full_path, err);
}
}
} else {
warn!("Not setting xattr on {:?}", full_path);
tr_warn!("Not setting xattr on {:?}", full_path);
}
}
if let Err(err) = fs::set_permissions(&full_path, Permissions::from_mode(self.mode)) {
warn!(
tr_warn!(
"Failed to set permissions {:o} on {:?}: {}",
self.mode,
full_path,
@ -313,7 +313,7 @@ impl Inode {
);
}
if let Err(err) = chown(&full_path, self.user, self.group) {
warn!(
tr_warn!(
"Failed to set user {} and group {} on {:?}: {}",
self.user,
self.group,

View File

@ -27,19 +27,18 @@ pub use self::metadata::{Inode, FileType, FileData, InodeError};
pub use self::backup::{BackupError, BackupOptions, DiffType};
pub use self::backup_file::{Backup, BackupFileError};
pub use self::integrity::IntegrityError;
pub use self::info::{RepositoryInfo, BundleAnalysis};
pub use self::info::{RepositoryInfo, BundleAnalysis, RepositoryStatistics};
pub use self::layout::RepositoryLayout;
use self::bundle_map::BundleMap;
const REPOSITORY_README: &'static [u8] = include_bytes!("../../docs/repository_readme.md");
const DEFAULT_EXCLUDES: &'static [u8] = include_bytes!("../../docs/excludes.default");
const REPOSITORY_README: &[u8] = include_bytes!("../../docs/repository_readme.md");
const DEFAULT_EXCLUDES: &[u8] = include_bytes!("../../docs/excludes.default");
const INDEX_MAGIC: [u8; 7] = *b"zvault\x02";
const INDEX_VERSION: u8 = 1;
#[repr(packed)]
#[derive(Clone, Copy, PartialEq, Debug, Default)]
pub struct Location {
pub bundle: u32,
@ -48,8 +47,8 @@ pub struct Location {
impl Location {
pub fn new(bundle: u32, chunk: u32) -> Self {
Location {
bundle: bundle,
chunk: chunk
bundle,
chunk
}
}
}
@ -93,7 +92,7 @@ pub struct Repository {
impl Repository {
pub fn create<P: AsRef<Path>, R: AsRef<Path>>(
path: P,
config: Config,
config: &Config,
remote: R,
) -> Result<Self, RepositoryError> {
let layout = RepositoryLayout::new(path.as_ref().to_path_buf());
@ -111,7 +110,7 @@ impl Repository {
));
try!(fs::create_dir_all(layout.remote_locks_path()));
try!(config.save(layout.config_path()));
try!(BundleDb::create(layout.clone()));
try!(BundleDb::create(&layout));
try!(Index::<Hash, Location>::create(
layout.index_path(),
&INDEX_MAGIC,
@ -119,11 +118,11 @@ impl Repository {
));
try!(BundleMap::create().save(layout.bundle_map_path()));
try!(fs::create_dir_all(layout.backups_path()));
Self::open(path)
Self::open(path, true)
}
#[allow(unknown_lints, useless_let_if_seq)]
pub fn open<P: AsRef<Path>>(path: P) -> Result<Self, RepositoryError> {
pub fn open<P: AsRef<Path>>(path: P, online: bool) -> Result<Self, RepositoryError> {
let layout = RepositoryLayout::new(path.as_ref().to_path_buf());
if !layout.remote_exists() {
return Err(RepositoryError::NoRemote);
@ -134,12 +133,12 @@ impl Repository {
let local_locks = LockFolder::new(layout.local_locks_path());
let lock = try!(local_locks.lock(false));
let crypto = Arc::new(Mutex::new(try!(Crypto::open(layout.keys_path()))));
let (bundles, new, gone) = try!(BundleDb::open(layout.clone(), crypto.clone()));
let (bundles, new, gone) = try!(BundleDb::open(layout.clone(), crypto.clone(), online));
let (index, mut rebuild_index) =
match unsafe { Index::open(layout.index_path(), &INDEX_MAGIC, INDEX_VERSION) } {
Ok(index) => (index, false),
Err(err) => {
error!("Failed to load local index:\n\tcaused by: {}", err);
tr_error!("Failed to load local index:\n\tcaused by: {}", err);
(
try!(Index::create(
layout.index_path(),
@ -153,48 +152,48 @@ impl Repository {
let (bundle_map, rebuild_bundle_map) = match BundleMap::load(layout.bundle_map_path()) {
Ok(bundle_map) => (bundle_map, false),
Err(err) => {
error!("Failed to load local bundle map:\n\tcaused by: {}", err);
tr_error!("Failed to load local bundle map:\n\tcaused by: {}", err);
(BundleMap::create(), true)
}
};
let dirty = layout.dirtyfile_path().exists();
let mut repo = Repository {
layout: layout,
layout,
dirty: true,
chunker: config.chunker.create(),
config: config,
index: index,
crypto: crypto,
bundle_map: bundle_map,
config,
index,
crypto,
bundle_map,
next_data_bundle: 0,
next_meta_bundle: 0,
bundles: bundles,
bundles,
data_bundle: None,
meta_bundle: None,
lock: lock,
remote_locks: remote_locks,
local_locks: local_locks
lock,
remote_locks,
local_locks
};
if !rebuild_bundle_map {
let mut save_bundle_map = false;
if !gone.is_empty() {
info!("Removig {} old bundles from index", gone.len());
tr_info!("Removing {} old bundles from index", gone.len());
try!(repo.write_mode());
for bundle in gone {
try!(repo.remove_gone_remote_bundle(bundle))
try!(repo.remove_gone_remote_bundle(&bundle))
}
save_bundle_map = true;
}
if !new.is_empty() {
info!("Adding {} new bundles to index", new.len());
tr_info!("Adding {} new bundles to index", new.len());
try!(repo.write_mode());
for bundle in ProgressIter::new(
"adding bundles to index",
tr!("adding bundles to index"),
new.len(),
new.into_iter()
)
{
try!(repo.add_new_remote_bundle(bundle))
try!(repo.add_new_remote_bundle(&bundle))
}
save_bundle_map = true;
}
@ -224,19 +223,19 @@ impl Repository {
key_files: Vec<String>,
) -> Result<Self, RepositoryError> {
let path = path.as_ref();
let mut repo = try!(Repository::create(path, Config::default(), remote));
let mut repo = try!(Repository::create(path, &Config::default(), remote));
for file in key_files {
try!(repo.crypto.lock().unwrap().register_keyfile(file));
}
repo = try!(Repository::open(path));
repo = try!(Repository::open(path, true));
let mut backups: Vec<(String, Backup)> = try!(repo.get_all_backups()).into_iter().collect();
backups.sort_by_key(|&(_, ref b)| b.timestamp);
if let Some((name, backup)) = backups.pop() {
info!("Taking configuration from the last backup '{}'", name);
tr_info!("Taking configuration from the last backup '{}'", name);
repo.config = backup.config;
try!(repo.save_config())
} else {
warn!(
tr_warn!(
"No backup found in the repository to take configuration from, please set the configuration manually."
);
}
@ -250,10 +249,11 @@ impl Repository {
secret: SecretKey,
) -> Result<(), RepositoryError> {
try!(self.write_mode());
Ok(try!(self.crypto.lock().unwrap().register_secret_key(
try!(self.crypto.lock().unwrap().register_secret_key(
public,
secret
)))
));
Ok(())
}
#[inline]
@ -267,7 +267,7 @@ impl Repository {
pub fn set_encryption(&mut self, public: Option<&PublicKey>) {
if let Some(key) = public {
if !self.crypto.lock().unwrap().contains_secret_key(key) {
warn!("The secret key for that public key is not stored in the repository.")
tr_warn!("The secret key for that public key is not stored in the repository.")
}
let mut key_bytes = Vec::new();
key_bytes.extend_from_slice(&key[..]);
@ -338,11 +338,11 @@ impl Repository {
Ok(())
}
fn add_new_remote_bundle(&mut self, bundle: BundleInfo) -> Result<(), RepositoryError> {
fn add_new_remote_bundle(&mut self, bundle: &BundleInfo) -> Result<(), RepositoryError> {
if self.bundle_map.find(&bundle.id).is_some() {
return Ok(());
}
debug!("Adding new bundle to index: {}", bundle.id);
tr_debug!("Adding new bundle to index: {}", bundle.id);
let bundle_id = match bundle.mode {
BundleMode::Data => self.next_data_bundle,
BundleMode::Meta => self.next_meta_bundle,
@ -374,9 +374,9 @@ impl Repository {
Ok(())
}
fn remove_gone_remote_bundle(&mut self, bundle: BundleInfo) -> Result<(), RepositoryError> {
fn remove_gone_remote_bundle(&mut self, bundle: &BundleInfo) -> Result<(), RepositoryError> {
if let Some(id) = self.bundle_map.find(&bundle.id) {
debug!("Removing bundle from index: {}", bundle.id);
tr_debug!("Removing bundle from index: {}", bundle.id);
try!(self.bundles.delete_local_bundle(&bundle.id));
try!(self.index.filter(|_key, data| data.bundle != id));
self.bundle_map.remove(id);
@ -386,7 +386,8 @@ impl Repository {
#[inline]
fn write_mode(&mut self) -> Result<(), RepositoryError> {
Ok(try!(self.local_locks.upgrade(&mut self.lock)))
try!(self.local_locks.upgrade(&mut self.lock));
Ok(())
}
#[inline]
@ -404,7 +405,7 @@ impl Repository {
impl Drop for Repository {
fn drop(&mut self) {
if let Err(err) = self.flush() {
error!("Failed to flush repository: {}", err);
tr_error!("Failed to flush repository: {}", err);
}
}
}

View File

@ -93,7 +93,7 @@ fn inode_from_entry<R: Read>(entry: &mut tar::Entry<R>) -> Result<Inode, Reposit
_ => return Err(InodeError::UnsupportedFiletype(path.to_path_buf()).into()),
};
Inode {
file_type: file_type,
file_type,
name: path.file_name()
.map(|s| s.to_string_lossy().to_string())
.unwrap_or_else(|| "/".to_string()),
@ -177,7 +177,7 @@ impl Repository {
} else {
if let Some(FileData::ChunkedIndirect(ref chunks)) = inode.data {
for &(_, len) in chunks.iter() {
inode.cum_size += len as u64;
inode.cum_size += u64::from(len);
}
}
inode.cum_files = 1;
@ -198,7 +198,7 @@ impl Repository {
Err(RepositoryError::Inode(_)) |
Err(RepositoryError::Chunker(_)) |
Err(RepositoryError::Io(_)) => {
info!("Failed to backup {:?}", path);
tr_info!("Failed to backup {:?}", path);
failed_paths.push(path);
continue;
}
@ -226,7 +226,7 @@ impl Repository {
children.remove(&inode.name);
parent_inode.cum_size += inode.cum_size;
for &(_, len) in chunks.iter() {
parent_inode.cum_size += len as u64;
parent_inode.cum_size += u64::from(len);
}
parent_inode.cum_files += inode.cum_files;
parent_inode.cum_dirs += inode.cum_dirs;
@ -243,7 +243,7 @@ impl Repository {
if roots.len() == 1 {
Ok(roots.pop().unwrap())
} else {
warn!("Tar file contains multiple roots, adding dummy folder");
tr_warn!("Tar file contains multiple roots, adding dummy folder");
let mut root_inode = Inode {
file_type: FileType::Directory,
mode: 0o755,
@ -257,7 +257,7 @@ impl Repository {
for (inode, chunks) in roots {
root_inode.cum_size += inode.cum_size;
for &(_, len) in chunks.iter() {
root_inode.cum_size += len as u64;
root_inode.cum_size += u64::from(len);
}
root_inode.cum_files += inode.cum_files;
root_inode.cum_dirs += inode.cum_dirs;
@ -334,7 +334,8 @@ impl Repository {
str::from_utf8(value).unwrap()
);
}
Ok(try!(tarfile.append_pax_extensions(&pax)))
try!(tarfile.append_pax_extensions(&pax));
Ok(())
}
fn export_tarfile_recurse<W: Write>(

View File

@ -20,11 +20,11 @@ impl Repository {
force: bool,
) -> Result<(), RepositoryError> {
try!(self.flush());
info!("Locking repository");
tr_info!("Locking repository");
try!(self.write_mode());
let _lock = try!(self.lock(true));
// analyze_usage will set the dirty flag
info!("Analyzing chunk usage");
tr_info!("Analyzing chunk usage");
let usage = try!(self.analyze_usage());
let mut data_total = 0;
let mut data_used = 0;
@ -32,7 +32,7 @@ impl Repository {
data_total += bundle.info.encoded_size;
data_used += bundle.get_used_size();
}
info!(
tr_info!(
"Usage: {} of {}, {:.1}%",
to_file_size(data_used as u64),
to_file_size(data_total as u64),
@ -40,10 +40,12 @@ impl Repository {
);
let mut rewrite_bundles = HashSet::new();
let mut reclaim_space = 0;
let mut rewrite_data = 0;
for (id, bundle) in &usage {
if bundle.get_usage_ratio() <= ratio {
rewrite_bundles.insert(*id);
reclaim_space += bundle.get_unused_size();
rewrite_data += bundle.get_used_size();
}
}
if combine {
@ -68,17 +70,18 @@ impl Repository {
}
}
}
info!(
"Reclaiming {} by rewriting {} bundles",
tr_info!(
"Reclaiming about {} by rewriting {} bundles ({})",
to_file_size(reclaim_space as u64),
rewrite_bundles.len()
rewrite_bundles.len(),
to_file_size(rewrite_data as u64)
);
if !force {
self.dirty = false;
return Ok(());
}
for id in ProgressIter::new(
"rewriting bundles",
tr!("rewriting bundles"),
rewrite_bundles.len(),
rewrite_bundles.iter()
)
@ -97,18 +100,20 @@ impl Repository {
}
}
try!(self.flush());
info!("Checking index");
tr_info!("Checking index");
for (hash, location) in self.index.iter() {
if rewrite_bundles.contains(&location.bundle) {
panic!(
let loc_bundle = location.bundle;
let loc_chunk = location.chunk;
if rewrite_bundles.contains(&loc_bundle) {
tr_panic!(
"Removed bundle is still referenced in index: hash:{}, bundle:{}, chunk:{}",
hash,
location.bundle,
location.chunk
loc_bundle,
loc_chunk
);
}
}
info!("Deleting {} bundles", rewrite_bundles.len());
tr_info!("Deleting {} bundles", rewrite_bundles.len());
for id in rewrite_bundles {
try!(self.delete_bundle(id));
}
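
A worked example of the selection above: with a ratio of 0.5, a 100 MiB bundle of which only 30 MiB is still referenced has a usage ratio of 0.3 and is queued for rewriting, reclaiming about 70 MiB at the cost of rewriting 30 MiB. A sketch with hypothetical sizes:

    let encoded_size = 100u64; // bundle.info.encoded_size
    let used_size = 30u64;     // bundle.get_used_size()
    let ratio = 0.5f32;
    if (used_size as f32 / encoded_size as f32) <= ratio {
        let reclaim_space = encoded_size - used_size; // ~70 MiB reclaimed
        let rewrite_data = used_size;                 // ~30 MiB rewritten
        assert_eq!((reclaim_space, rewrite_data), (70, 30));
    }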

216
src/translation.rs Normal file
View File

@ -0,0 +1,216 @@
use std::borrow::Cow;
use std::collections::HashMap;
use std::cmp::max;
use std::str;
use std::path::{Path, PathBuf};
use std::io::Read;
use std::fs::File;
use locale_config::Locale;
pub type CowStr = Cow<'static, str>;
fn read_u32(b: &[u8], reorder: bool) -> u32 {
if reorder {
(u32::from(b[0]) << 24) + (u32::from(b[1]) << 16) + (u32::from(b[2]) << 8) + u32::from(b[3])
} else {
(u32::from(b[3]) << 24) + (u32::from(b[2]) << 16) + (u32::from(b[1]) << 8) + u32::from(b[0])
}
}
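
A quick check of this helper against the documented MO magic number, matching the test in `MoFile::new` below: on disk, a little-endian catalog starts with the bytes `de 12 04 95`.

    assert_eq!(read_u32(&[0xde, 0x12, 0x04, 0x95], false), 0x9504_12de);
    assert_eq!(read_u32(&[0xde, 0x12, 0x04, 0x95], true), 0xde12_0495);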
struct MoFile<'a> {
data: &'a [u8],
count: usize,
orig_pos: usize,
trans_pos: usize,
reorder: bool,
i: usize
}
impl<'a> MoFile<'a> {
fn new(data: &'a [u8]) -> Result<Self, ()> {
if data.len() < 20 {
return Err(());
}
// Magic header
let magic = read_u32(&data[0..4], false);
let reorder = if magic == 0x9504_12de {
false
} else if magic == 0xde12_0495 {
true
} else {
return Err(());
};
// Version
if read_u32(&data[4..8], reorder) != 0x0000_0000 {
return Err(());
}
// Translation count
let count = read_u32(&data[8..12], reorder) as usize;
// Original string offset
let orig_pos = read_u32(&data[12..16], reorder) as usize;
// Translation string offset
let trans_pos = read_u32(&data[16..20], reorder) as usize;
if data.len() < max(orig_pos, trans_pos) + count * 8 {
return Err(());
}
Ok(MoFile{
data,
count,
orig_pos,
trans_pos,
reorder,
i: 0
})
}
}
impl<'a> Iterator for MoFile<'a> {
type Item = (&'a str, &'a str);
fn next(&mut self) -> Option<Self::Item> {
if self.i >= self.count {
return None;
}
let length = read_u32(&self.data[self.orig_pos+self.i*8..], self.reorder) as usize;
let offset = read_u32(&self.data[self.orig_pos+self.i*8+4..], self.reorder) as usize;
let orig = match str::from_utf8(&self.data[offset..offset+length]) {
Ok(s) => s,
Err(_) => return None
};
let length = read_u32(&self.data[self.trans_pos+self.i*8..], self.reorder) as usize;
let offset = read_u32(&self.data[self.trans_pos+self.i*8+4..], self.reorder) as usize;
let trans = match str::from_utf8(&self.data[offset..offset+length]) {
Ok(s) => s,
Err(_) => return None
};
self.i += 1;
Some((orig, trans))
}
}
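
A usage sketch for the parser, assuming the `MoFile` type above is in scope: it yields `(original, translation)` pairs and stops silently on malformed UTF-8.

    fn dump_catalog(data: &[u8]) {
        if let Ok(mo_file) = MoFile::new(data) {
            for (orig, trans) in mo_file {
                println!("{:?} -> {:?}", orig, trans);
            }
        }
    }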
pub struct Translation(HashMap<CowStr, CowStr>);
impl Translation {
pub fn new() -> Self {
Translation(Default::default())
}
pub fn from_mo_data(data: &'static[u8]) -> Self {
let mut translation = Translation::new();
match MoFile::new(data) {
Ok(mo_file) => for (orig, trans) in mo_file {
translation.set(orig, trans);
}
Err(_) => error!("Invalid translation data")
}
translation
}
pub fn from_mo_file(path: &Path) -> Self {
let mut translation = Translation::new();
if let Ok(mut file) = File::open(&path) {
let mut data = vec![];
if file.read_to_end(&mut data).is_ok() {
match MoFile::new(&data) {
Ok(mo_file) => for (orig, trans) in mo_file {
translation.set(orig.to_string(), trans.to_string());
}
Err(_) => error!("Invalid translation data")
}
}
}
translation
}
pub fn set<O: Into<CowStr>, T: Into<CowStr>>(&mut self, orig: O, trans: T) {
let trans = trans.into();
if !trans.is_empty() {
self.0.insert(orig.into(), trans);
}
}
pub fn get<'a, 'b: 'a>(&'b self, orig: &'a str) -> &'a str {
self.0.get(orig).map(|s| s as &'a str).unwrap_or(orig)
}
}
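
The lookup semantics in a nutshell (a sketch assuming the `Translation` type above is in scope): unknown strings fall through unchanged, and empty translations are dropped by `set`, so untranslated entries in a catalog never blank out messages.

    let mut trans = Translation::new();
    trans.set("file", "Datei");
    assert_eq!(trans.get("file"), "Datei");
    assert_eq!(trans.get("directory"), "directory"); // no entry: original returned
    trans.set("chunk", "");                          // empty translation is ignored
    assert_eq!(trans.get("chunk"), "chunk");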
fn get_translation(locale: &str) -> Translation {
if let Some(trans) = find_translation(locale) {
return trans;
}
let language = locale.split('_').next().unwrap();
if let Some(trans) = find_translation(language) {
return trans;
}
Translation::new()
}
fn find_translation(name: &str) -> Option<Translation> {
if EMBEDDED_TRANS.contains_key(name) {
return Some(Translation::from_mo_data(EMBEDDED_TRANS[name]));
}
let path = PathBuf::from(format!("/usr/share/locale/{}/LC_MESSAGES/zvault.mo", name));
if path.exists() {
return Some(Translation::from_mo_file(&path));
}
let path = PathBuf::from(format!("lang/{}.mo", name));
if path.exists() {
return Some(Translation::from_mo_file(&path));
}
None
}
lazy_static! {
pub static ref EMBEDDED_TRANS: HashMap<&'static str, &'static[u8]> = {
HashMap::new()
//map.insert("de", include_bytes!("../lang/de.mo") as &'static [u8]);
};
pub static ref TRANS: Translation = {
let locale = Locale::current();
let locale_str = locale.tags_for("").next().unwrap().as_ref().to_string();
get_translation(&locale_str)
};
}
#[macro_export] macro_rules! tr {
($fmt:tt) => (::translation::TRANS.get($fmt));
}
#[macro_export] macro_rules! tr_format {
($fmt:tt) => (tr!($fmt));
($fmt:tt, $($arg:tt)*) => (rt_format!(tr!($fmt), $($arg)*).expect("invalid format"));
}
#[macro_export] macro_rules! tr_println {
($fmt:tt) => (println!("{}", tr!($fmt)));
($fmt:tt, $($arg:tt)*) => (rt_println!(tr!($fmt), $($arg)*).expect("invalid format"));
}
#[macro_export] macro_rules! tr_trace {
($($arg:tt)*) => (debug!("{}", tr_format!($($arg)*)));
}
#[macro_export] macro_rules! tr_debug {
($($arg:tt)*) => (debug!("{}", tr_format!($($arg)*)));
}
#[macro_export] macro_rules! tr_info {
($($arg:tt)*) => (info!("{}", tr_format!($($arg)*)));
}
#[macro_export] macro_rules! tr_warn {
($($arg:tt)*) => (warn!("{}", tr_format!($($arg)*)));
}
#[macro_export] macro_rules! tr_error {
($($arg:tt)*) => (error!("{}", tr_format!($($arg)*)));
}
#[macro_export] macro_rules! tr_panic {
($($arg:tt)*) => (panic!("{}", tr_format!($($arg)*)));
}
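
These macros let the rest of this changeset swap `info!`/`warn!`/`error!`/`debug!`/`panic!` for `tr_*!` variants mechanically: the format string is looked up in `TRANS` first and then formatted at runtime via `rt_format!`, since a translated string cannot be a compile-time format literal. For example:

    info!("Adding {} new bundles to index", new.len());    // before
    tr_info!("Adding {} new bundles to index", new.len()); // after: catalog lookup first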

View File

@ -11,7 +11,7 @@ impl Bitmap {
let len = (len + 7) / 8;
let mut bytes = Vec::with_capacity(len);
bytes.resize(len, 0);
Self { bytes: bytes }
Self { bytes }
}
/// Returns the number of bits in the bitmap
@ -67,7 +67,7 @@ impl Bitmap {
#[inline]
pub fn from_bytes(bytes: Vec<u8>) -> Self {
Self { bytes: bytes }
Self { bytes }
}
}

View File

@ -61,7 +61,7 @@ impl ChunkList {
#[inline]
pub fn read_from(src: &[u8]) -> Self {
if src.len() % 20 != 0 {
warn!("Reading truncated chunk list");
tr_warn!("Reading truncated chunk list");
}
ChunkList::read_n_from(src.len() / 20, &mut Cursor::new(src)).unwrap()
}
@ -129,7 +129,7 @@ impl<'a> Deserialize<'a> for ChunkList {
{
let data: Vec<u8> = try!(ByteBuf::deserialize(deserializer)).into();
if data.len() % 20 != 0 {
return Err(D::Error::custom("Invalid chunk list length"));
return Err(D::Error::custom(tr!("Invalid chunk list length")));
}
Ok(
ChunkList::read_n_from(data.len() / 20, &mut Cursor::new(data)).unwrap()

View File

@ -56,9 +56,9 @@ impl<T> ProgressIter<T> {
bar.message(&msg);
bar.set_max_refresh_rate(Some(Duration::from_millis(100)));
ProgressIter {
inner: inner,
bar: bar,
msg: msg
inner,
bar,
msg
}
}
}
@ -73,7 +73,7 @@ impl<T: Iterator> Iterator for ProgressIter<T> {
fn next(&mut self) -> Option<Self::Item> {
match self.inner.next() {
None => {
let msg = self.msg.clone() + "done.";
let msg = self.msg.clone() + tr!("done.");
self.bar.finish_print(&msg);
None
}

View File

@ -3,7 +3,6 @@ use std::ffi::{CStr, CString};
use std::io::{self, Write};
use std::str::FromStr;
use libc;
use squash::*;
@ -11,31 +10,31 @@ quick_error!{
#[derive(Debug)]
pub enum CompressionError {
UnsupportedCodec(name: String) {
description("Unsupported codec")
display("Unsupported codec: {}", name)
description(tr!("Unsupported codec"))
display("{}", tr_format!("Unsupported codec: {}", name))
}
InitializeCodec {
description("Failed to initialize codec")
description(tr!("Failed to initialize codec"))
}
InitializeOptions {
description("Failed to set codec options")
description(tr!("Failed to set codec options"))
}
InitializeStream {
description("Failed to create stream")
description(tr!("Failed to create stream"))
}
Operation(reason: &'static str) {
description("Operation failed")
display("Operation failed: {}", reason)
description(tr!("Operation failed"))
display("{}", tr_format!("Operation failed: {}", reason))
}
Output(err: io::Error) {
from()
cause(err)
description("Failed to write to output")
description(tr!("Failed to write to output"))
}
}
}
#[derive(Clone, Debug, Copy, Eq, PartialEq)]
#[derive(Clone, Debug, Copy, Eq, PartialEq, Hash)]
pub enum CompressionMethod {
Deflate, // Standardized
Brotli, // Good speed and ratio
@ -50,7 +49,7 @@ serde_impl!(CompressionMethod(u8) {
});
#[derive(Clone, Debug, Eq, PartialEq)]
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct Compression {
method: CompressionMethod,
level: u8
@ -93,8 +92,8 @@ impl Compression {
_ => return Err(CompressionError::UnsupportedCodec(name.to_string())),
};
Ok(Compression {
method: method,
level: level
method,
level
})
}
@ -234,7 +233,7 @@ impl CompressionStream {
#[inline]
fn new(stream: *mut SquashStream) -> Self {
CompressionStream {
stream: stream,
stream,
buffer: [0; 16 * 1024]
}
}
@ -285,6 +284,8 @@ impl CompressionStream {
impl Drop for CompressionStream {
fn drop(&mut self) {
unsafe {
//squash_object_unref(self.stream as *mut ::std::os::raw::c_void);
use libc;
squash_object_unref(self.stream as *mut libc::c_void);
}
}

View File

@ -21,7 +21,7 @@ static INIT: Once = ONCE_INIT;
fn sodium_init() {
INIT.call_once(|| if !sodiumoxide::init() {
panic!("Failed to initialize sodiumoxide");
tr_panic!("Failed to initialize sodiumoxide");
});
}
@ -29,27 +29,27 @@ quick_error!{
#[derive(Debug)]
pub enum EncryptionError {
InvalidKey {
description("Invalid key")
description(tr!("Invalid key"))
}
MissingKey(key: PublicKey) {
description("Missing key")
display("Missing key: {}", to_hex(&key[..]))
description(tr!("Missing key"))
display("{}", tr_format!("Missing key: {}", to_hex(&key[..])))
}
Operation(reason: &'static str) {
description("Operation failed")
display("Operation failed: {}", reason)
description(tr!("Operation failed"))
display("{}", tr_format!("Operation failed: {}", reason))
}
Io(err: io::Error) {
from()
cause(err)
description("IO error")
display("IO error: {}", err)
description(tr!("IO error"))
display("{}", tr_format!("IO error: {}", err))
}
Yaml(err: serde_yaml::Error) {
from()
cause(err)
description("Yaml format error")
display("Yaml format error: {}", err)
description(tr!("Yaml format error"))
display("{}", tr_format!("Yaml format error: {}", err))
}
}
}
@ -68,7 +68,7 @@ impl EncryptionMethod {
pub fn from_string(val: &str) -> Result<Self, &'static str> {
match val {
"sodium" => Ok(EncryptionMethod::Sodium),
_ => Err("Unsupported encryption method"),
_ => Err(tr!("Unsupported encryption method")),
}
}
@ -108,7 +108,8 @@ impl KeyfileYaml {
pub fn save<P: AsRef<Path>>(&self, path: P) -> Result<(), EncryptionError> {
let mut f = try!(File::create(path));
Ok(try!(serde_yaml::to_writer(&mut f, &self)))
try!(serde_yaml::to_writer(&mut f, &self));
Ok(())
}
}
@ -151,7 +152,7 @@ impl Crypto {
}
Ok(Crypto {
path: Some(path),
keys: keys
keys
})
}
@ -254,7 +255,7 @@ impl Crypto {
match *method {
EncryptionMethod::Sodium => {
sealedbox::open(data, &public, secret).map_err(|_| {
EncryptionError::Operation("Decryption failed")
EncryptionError::Operation(tr!("Decryption failed"))
})
}
}
@ -284,7 +285,7 @@ impl Crypto {
let mut pk = [0u8; 32];
let mut sk = [0u8; 32];
if unsafe { libsodium_sys::crypto_box_seed_keypair(&mut pk, &mut sk, &seed) } != 0 {
panic!("Libsodium failed");
tr_panic!("Libsodium failed");
}
(
PublicKey::from_slice(&pk).unwrap(),

View File

@ -12,7 +12,6 @@ use std::u64;
use std::io::{self, Read, Write};
#[repr(packed)]
#[derive(Clone, Copy, PartialEq, Hash, Eq, Default, Ord, PartialOrd)]
pub struct Hash {
pub high: u64,
@ -46,8 +45,8 @@ impl Hash {
let high = try!(src.read_u64::<LittleEndian>());
let low = try!(src.read_u64::<LittleEndian>());
Ok(Hash {
high: high,
low: low
high,
low
})
}
@ -56,8 +55,8 @@ impl Hash {
let high = try!(u64::from_str_radix(&val[..16], 16).map_err(|_| ()));
let low = try!(u64::from_str_radix(&val[16..], 16).map_err(|_| ()));
Ok(Self {
high: high,
low: low
high,
low
})
}
}
@ -96,7 +95,7 @@ impl<'a> Deserialize<'a> for Hash {
{
let dat: Vec<u8> = try!(ByteBuf::deserialize(deserializer)).into();
if dat.len() != 16 {
return Err(D::Error::custom("Invalid key length"));
return Err(D::Error::custom(tr!("Invalid key length")));
}
Ok(Hash {
high: LittleEndian::read_u64(&dat[..8]),
@ -106,7 +105,7 @@ impl<'a> Deserialize<'a> for Hash {
}
#[derive(Debug, Clone, Copy, Eq, PartialEq)]
#[derive(Debug, Clone, Copy, Eq, PartialEq, Hash)]
pub enum HashMethod {
Blake2,
Murmur3
@ -142,7 +141,7 @@ impl HashMethod {
match name {
"blake2" => Ok(HashMethod::Blake2),
"murmur3" => Ok(HashMethod::Murmur3),
_ => Err("Unsupported hash method"),
_ => Err(tr!("Unsupported hash method")),
}
}

View File

@ -15,22 +15,22 @@ quick_error!{
Io(err: io::Error) {
from()
cause(err)
description("IO error")
display("Lock error: IO error\n\tcaused by: {}", err)
description(tr!("IO error"))
display("{}", tr_format!("Lock error: IO error\n\tcaused by: {}", err))
}
Yaml(err: serde_yaml::Error) {
from()
cause(err)
description("Yaml format error")
display("Lock error: yaml format error\n\tcaused by: {}", err)
description(tr!("Yaml format error"))
display("{}", tr_format!("Lock error: yaml format error\n\tcaused by: {}", err))
}
InvalidLockState(reason: &'static str) {
description("Invalid lock state")
display("Lock error: invalid lock state: {}", reason)
description(tr!("Invalid lock state"))
display("{}", tr_format!("Lock error: invalid lock state: {}", reason))
}
Locked {
description("Locked")
display("Lock error: locked")
description(tr!("Locked"))
display("{}", tr_format!("Lock error: locked"))
}
}
}
@ -58,7 +58,8 @@ impl LockFile {
pub fn save<P: AsRef<Path>>(&self, path: P) -> Result<(), LockError> {
let mut f = try!(File::create(path));
Ok(try!(serde_yaml::to_writer(&mut f, &self)))
try!(serde_yaml::to_writer(&mut f, &self));
Ok(())
}
}
@ -121,13 +122,13 @@ impl LockFolder {
for lock in try!(self.get_locks()) {
if lock.exclusive {
if level == LockLevel::Exclusive {
return Err(LockError::InvalidLockState("multiple exclusive locks"));
return Err(LockError::InvalidLockState(tr!("multiple exclusive locks")));
} else {
level = LockLevel::Exclusive
}
} else if level == LockLevel::Exclusive {
return Err(LockError::InvalidLockState(
"exclusive lock and shared locks"
tr!("exclusive lock and shared locks")
));
} else {
level = LockLevel::Shared
@ -145,7 +146,7 @@ impl LockFolder {
hostname: get_hostname().unwrap(),
processid: unsafe { libc::getpid() } as usize,
date: Utc::now().timestamp(),
exclusive: exclusive
exclusive
};
let path = self.path.join(format!(
"{}-{}.lock",
@ -155,7 +156,7 @@ impl LockFolder {
try!(lockfile.save(&path));
let handle = LockHandle {
lock: lockfile,
path: path
path
};
if self.get_lock_level().is_err() {
try!(handle.release());

View File

@ -15,8 +15,8 @@ impl<K: Eq + Hash, V> LruCache<K, V> {
pub fn new(min_size: usize, max_size: usize) -> Self {
LruCache {
items: HashMap::default(),
min_size: min_size,
max_size: max_size,
min_size,
max_size,
next: 0
}
}

View File

@ -9,6 +9,7 @@ mod cli;
mod hostname;
mod fs;
mod lock;
mod statistics;
pub mod msgpack;
pub use self::fs::*;
@ -22,3 +23,4 @@ pub use self::hex::*;
pub use self::cli::*;
pub use self::hostname::*;
pub use self::lock::*;
pub use self::statistics::*;

57
src/util/statistics.rs Normal file
View File

@ -0,0 +1,57 @@
#[derive(Debug, Default)]
pub struct ValueStats {
pub min: f32,
pub max: f32,
pub avg: f32,
pub stddev: f32,
pub count: usize,
pub count_xs: usize,
pub count_s: usize,
pub count_m: usize,
pub count_l: usize,
pub count_xl: usize,
}
impl ValueStats {
pub fn from_iter<T: Iterator<Item=f32>, F: Fn() -> T>(iter: F) -> ValueStats {
let mut stats = ValueStats::default();
stats.min = ::std::f32::INFINITY;
let mut sum = 0.0f64;
for val in iter() {
if stats.min > val {
stats.min = val;
}
if stats.max < val {
stats.max = val;
}
sum += f64::from(val);
stats.count += 1;
}
stats.avg = (sum as f32) / (stats.count as f32);
if stats.count < 2 {
stats.count_m = stats.count;
return stats;
}
sum = 0.0;
for val in iter() {
sum += f64::from(val - stats.avg) * f64::from(val - stats.avg);
}
stats.stddev = ((sum as f32) / (stats.count as f32 - 1.0)).sqrt();
for val in iter() {
if val < stats.avg - 2.0 * stats.stddev {
stats.count_xs += 1;
} else if val < stats.avg - stats.stddev {
stats.count_s += 1;
} else if val < stats.avg + stats.stddev {
stats.count_m += 1;
} else if val < stats.avg + 2.0 * stats.stddev {
stats.count_l += 1;
} else {
stats.count_xl += 1;
}
}
stats
}
}
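
A usage sketch, assuming the `ValueStats` type above is in scope. `from_iter` takes a closure producing a fresh iterator because the data is walked up to three times: min/max/average, then standard deviation, then the five size-class counters.

    let sizes = vec![4.0f32, 8.0, 15.0, 16.0, 23.0, 42.0];
    let stats = ValueStats::from_iter(|| sizes.iter().cloned());
    println!("avg {:.1}, stddev {:.1}, {} values", stats.avg, stats.stddev, stats.count);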

78
test.sh
View File

@ -2,58 +2,60 @@ set -ex
rm -rf repos
mkdir repos
target/release/zvault init --compression brotli/3 repos/zvault_brotli3
target/release/zvault init --compression brotli/6 repos/zvault_brotli6
target/release/zvault init --compression lzma2/2 repos/zvault_lzma2
mkdir -p repos/remotes/zvault_brotli3 repos/remotes/zvault_brotli6 repos/remotes/zvault_lzma2
target/release/zvault init --compression brotli/3 --remote $(pwd)/repos/remotes/zvault_brotli3 $(pwd)/repos/zvault_brotli3
target/release/zvault init --compression brotli/6 --remote $(pwd)/repos/remotes/zvault_brotli6 $(pwd)/repos/zvault_brotli6
target/release/zvault init --compression lzma2/2 --remote $(pwd)/repos/remotes/zvault_lzma2 $(pwd)/repos/zvault_lzma2
attic init repos/attic
borg init -e none repos/borg
borg init -e none repos/borg-zlib
zbackup init --non-encrypted repos/zbackup
cat < test_data/silesia.tar > /dev/null
time target/release/zvault backup repos/zvault_brotli3::silesia1 test_data/silesia.tar
time target/release/zvault backup repos/zvault_brotli3::silesia2 test_data/silesia.tar
time target/release/zvault backup repos/zvault_brotli6::silesia1 test_data/silesia.tar
time target/release/zvault backup repos/zvault_brotli6::silesia2 test_data/silesia.tar
time target/release/zvault backup repos/zvault_lzma2::silesia1 test_data/silesia.tar
time target/release/zvault backup repos/zvault_lzma2::silesia2 test_data/silesia.tar
time attic create repos/attic::silesia1 test_data/silesia.tar
time attic create repos/attic::silesia2 test_data/silesia.tar
time borg create -C none repos/borg::silesia1 test_data/silesia.tar
time borg create -C none repos/borg::silesia2 test_data/silesia.tar
time borg create -C zlib repos/borg-zlib::silesia1 test_data/silesia.tar
time borg create -C zlib repos/borg-zlib::silesia2 test_data/silesia.tar
time zbackup backup --non-encrypted repos/zbackup/backups/silesia1 < test_data/silesia.tar
time zbackup backup --non-encrypted repos/zbackup/backups/silesia2 < test_data/silesia.tar
find test_data/silesia -type f | xargs cat > /dev/null
time target/release/zvault backup test_data/silesia $(pwd)/repos/zvault_brotli3::silesia1
time target/release/zvault backup test_data/silesia $(pwd)/repos/zvault_brotli3::silesia2
time target/release/zvault backup test_data/silesia $(pwd)/repos/zvault_brotli6::silesia1
time target/release/zvault backup test_data/silesia $(pwd)/repos/zvault_brotli6::silesia2
time target/release/zvault backup test_data/silesia $(pwd)/repos/zvault_lzma2::silesia1
time target/release/zvault backup test_data/silesia $(pwd)/repos/zvault_lzma2::silesia2
time attic create repos/attic::silesia1 test_data/silesia
time attic create repos/attic::silesia2 test_data/silesia
time borg create -C none repos/borg::silesia1 test_data/silesia
time borg create -C none repos/borg::silesia2 test_data/silesia
time borg create -C zlib repos/borg-zlib::silesia1 test_data/silesia
time borg create -C zlib repos/borg-zlib::silesia2 test_data/silesia
time tar -c test_data/silesia | zbackup backup --non-encrypted repos/zbackup/backups/silesia1
time tar -c test_data/silesia | zbackup backup --non-encrypted repos/zbackup/backups/silesia2
du -h test_data/silesia.tar
du -sh repos/zvault*/bundles repos/attic repos/borg repos/borg-zlib repos/zbackup
du -sh repos/remotes/zvault* repos/attic repos/borg repos/borg-zlib repos/zbackup
rm -rf repos
mkdir repos
target/release/zvault init --compression brotli/3 repos/zvault_brotli3
target/release/zvault init --compression brotli/6 repos/zvault_brotli6
target/release/zvault init --compression lzma2/2 repos/zvault_lzma2
mkdir -p repos/remotes/zvault_brotli3 repos/remotes/zvault_brotli6 repos/remotes/zvault_lzma2
target/release/zvault init --compression brotli/3 --remote $(pwd)/repos/remotes/zvault_brotli3 $(pwd)/repos/zvault_brotli3
target/release/zvault init --compression brotli/6 --remote $(pwd)/repos/remotes/zvault_brotli6 $(pwd)/repos/zvault_brotli6
target/release/zvault init --compression lzma2/2 --remote $(pwd)/repos/remotes/zvault_lzma2 $(pwd)/repos/zvault_lzma2
attic init repos/attic
borg init -e none repos/borg
borg init -e none repos/borg-zlib
zbackup init --non-encrypted repos/zbackup
cat < test_data/ubuntu.tar > /dev/null
time target/release/zvault backup repos/zvault_brotli3::ubuntu1 test_data/ubuntu.tar
time target/release/zvault backup repos/zvault_brotli3::ubuntu2 test_data/ubuntu.tar
time target/release/zvault backup repos/zvault_brotli6::ubuntu1 test_data/ubuntu.tar
time target/release/zvault backup repos/zvault_brotli6::ubuntu2 test_data/ubuntu.tar
time target/release/zvault backup repos/zvault_lzma2::ubuntu1 test_data/ubuntu.tar
time target/release/zvault backup repos/zvault_lzma2::ubuntu2 test_data/ubuntu.tar
time attic create repos/attic::ubuntu1 test_data/ubuntu.tar
time attic create repos/attic::ubuntu2 test_data/ubuntu.tar
time borg create -C none repos/borg::ubuntu1 test_data/ubuntu.tar
time borg create -C none repos/borg::ubuntu2 test_data/ubuntu.tar
time borg create -C zlib repos/borg-zlib::ubuntu1 test_data/ubuntu.tar
time borg create -C zlib repos/borg-zlib::ubuntu2 test_data/ubuntu.tar
time zbackup backup --non-encrypted repos/zbackup/backups/ubuntu1 < test_data/ubuntu.tar
time zbackup backup --non-encrypted repos/zbackup/backups/ubuntu2 < test_data/ubuntu.tar
find test_data/ubuntu -type f | xargs cat > /dev/null
time target/release/zvault backup test_data/ubuntu $(pwd)/repos/zvault_brotli3::ubuntu1
time target/release/zvault backup test_data/ubuntu $(pwd)/repos/zvault_brotli3::ubuntu2
time target/release/zvault backup test_data/ubuntu $(pwd)/repos/zvault_brotli6::ubuntu1
time target/release/zvault backup test_data/ubuntu $(pwd)/repos/zvault_brotli6::ubuntu2
time target/release/zvault backup test_data/ubuntu $(pwd)/repos/zvault_lzma2::ubuntu1
time target/release/zvault backup test_data/ubuntu $(pwd)/repos/zvault_lzma2::ubuntu2
time attic create repos/attic::ubuntu1 test_data/ubuntu
time attic create repos/attic::ubuntu2 test_data/ubuntu
time borg create -C none repos/borg::ubuntu1 test_data/ubuntu
time borg create -C none repos/borg::ubuntu2 test_data/ubuntu
time borg create -C zlib repos/borg-zlib::ubuntu1 test_data/ubuntu
time borg create -C zlib repos/borg-zlib::ubuntu2 test_data/ubuntu
time tar -c test_data/ubuntu | zbackup backup --non-encrypted repos/zbackup/backups/ubuntu1
time tar -c test_data/ubuntu | zbackup backup --non-encrypted repos/zbackup/backups/ubuntu2
du -h test_data/ubuntu.tar
du -sh repos/zvault*/bundles repos/attic repos/borg repos/borg-zlib repos/zbackup
du -sh repos/remotes/zvault* repos/attic repos/borg repos/borg-zlib repos/zbackup