mirror of https://github.com/dswd/zvault

Some changes

This commit is contained in:
parent 7b79456ec3
commit 336cc97fea

README.md | 61
@@ -1,4 +1,4 @@
-# ZVault Backup solution
+# zVault Backup Solution
 zVault is a highly efficient deduplicating backup solution that supports
 client-side encryption, compression and remote storage of backup data.
 
@@ -83,6 +83,65 @@ Backups can be mounted as a user-space filesystem to investigate and restore
 their contents. Once mounted, graphical programs like file managers can be used
 to work on the backup data and find the needed files.
 
+
+## Example usage
+
+As an example, I am going to back up my projects folder. To do that, I am
+initializing an encrypted zVault repository, storing the data on a remote
+filesystem which has been mounted on `/mnt/backup`.
+
+    #$> zvault init --encrypt --remote /mnt/backup
+    public: 2bea1d15...
+    secret: 3698a88c...
+
+    Bundle size: 25.0 MiB
+    Chunker: fastcdc/16
+    Compression: brotli/3
+    Encryption: 2bea1d15...
+    Hash method: blake2
+
+The repository has been created and zVault has generated a new key pair for me.
+I should now store this key pair in a safe location before I continue.
+
+Now I can back up my projects folder to the repository.
+
+    #$> zvault backup /home/dswd/projects ::projects1
+    info: No reference backup found, doing a full scan instead
+    Modified: false
+    Date: Thu, 6 Apr 2017 12:29:52 +0200
+    Source: capanord:/home/dswd/projects
+    Duration: 0:01:59.5
+    Entries: 29205 files, 9535 dirs
+    Total backup size: 5.4 GiB
+    Modified data size: 5.4 GiB
+    Deduplicated size: 3.2 GiB, 41.8% saved
+    Compressed size: 1.1 GiB in 48 bundles, 63.9% saved
+    Chunk count: 220410, avg size: 15.0 KiB
+
+The backup run took about 2 minutes and, looking at the data, I see that
+deduplication saved over 40% and compression saved over 60% more, so that in
+the end my backup only uses 1.1 GiB out of 5.4 GiB.
+
+After some work, I create another backup.
+
+    #$> zvault backup /home/dswd/projects ::projects2
+    info: Using backup projects1 as reference
+    Modified: false
+    Date: Thu, 6 Apr 2017 13:28:54 +0200
+    Source: capanord:/home/dswd/projects
+    Duration: 0:00:07.9
+    Entries: 29205 files, 9535 dirs
+    Total backup size: 5.4 GiB
+    Modified data size: 24.9 MiB
+    Deduplicated size: 10.6 MiB, 57.3% saved
+    Compressed size: 4.7 MiB in 2 bundles, 55.7% saved
+    Chunk count: 35507, avg size: 313 Bytes
+
+This time, the backup run only took about 8 seconds because zVault skipped most
+of the folder as unchanged. The backup only stored 4.7 MiB of data.
+This shows the true potential of deduplication.
+
+
 ### Semantic Versioning
 zVault sticks to the semantic versioning scheme. In its current pre-1.0 stage
 this has the following implications:
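The savings percentages in the example output above follow directly from the reported sizes. As a quick illustrative check in plain Rust (not zVault code; the GiB values are the rounded ones from the output, so the results only approximate the exact percentages zVault prints):

```rust
// Derive the "saved" percentages from the sizes reported in the example.
// Inputs are the rounded GiB values from the backup output above, so the
// results come out near, not equal to, the exact 41.8% / 63.9% shown there.
fn saved_percent(before: f64, after: f64) -> f64 {
    (1.0 - after / before) * 100.0
}

fn main() {
    let total = 5.4;        // total backup size, GiB
    let deduplicated = 3.2; // size after deduplication, GiB
    let compressed = 1.1;   // size after compression, GiB

    let dedup_saved = saved_percent(total, deduplicated);       // ≈ 40.7
    let compress_saved = saved_percent(deduplicated, compressed); // ≈ 65.6
    println!("dedup saved {:.1}%, compression saved {:.1}%",
             dedup_saved, compress_saved);
}
```

With the rounded inputs this prints roughly 40.7% and 65.6%, consistent with the exact figures in the output once rounding of the sizes is taken into account.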
@@ -45,7 +45,7 @@ configuration can be changed by _zvault-config(1)_ later.
 values.
 
-* `-e`, `--encryption`:
+* `-e`, `--encrypt`:
 
   Generate a keypair and enable encryption.
   Please see _zvault(1)_ for more information on *encryption*.
@@ -241,7 +241,7 @@ pub fn parse() -> Result<Arguments, ErrorCode> {
             .arg(Arg::from_usage("bundle_size --bundle-size [SIZE] 'Set the target bundle size in MiB (default: 25)'"))
             .arg(Arg::from_usage("--chunker [CHUNKER] 'Set the chunker algorithm and target chunk size (default: fastcdc/16)'"))
             .arg(Arg::from_usage("-c --compression [COMPRESSION] 'Set the compression method and level (default: brotli/3)'"))
-            .arg(Arg::from_usage("-e --encryption 'Generate a keypair and enable encryption'"))
+            .arg(Arg::from_usage("-e --encrypt 'Generate a keypair and enable encryption'"))
             .arg(Arg::from_usage("--hash [HASH] 'Set the hash method (default: blake2)'"))
             .arg(Arg::from_usage("-r --remote <REMOTE> 'Set the path to the mounted remote storage'"))
             .arg(Arg::from_usage("[REPO] 'The path for the new repository'")))
@@ -327,7 +327,7 @@ pub fn parse() -> Result<Arguments, ErrorCode> {
             bundle_size: (try!(parse_num(args.value_of("bundle_size").unwrap_or(&DEFAULT_BUNDLE_SIZE.to_string()), "Bundle size")) * 1024 * 1024) as usize,
             chunker: try!(parse_chunker(args.value_of("chunker").unwrap_or(DEFAULT_CHUNKER))),
             compression: try!(parse_compression(args.value_of("compression").unwrap_or(DEFAULT_COMPRESSION))),
-            encryption: args.is_present("encryption"),
+            encryption: args.is_present("encrypt"),
             hash: try!(parse_hash(args.value_of("hash").unwrap_or(DEFAULT_HASH))),
             repo_path: repository.to_string(),
             remote_path: args.value_of("remote").unwrap().to_string()
@@ -153,7 +153,7 @@ impl Repository {
     ) -> Result<Inode, RepositoryError> {
         let path = path.as_ref();
         let mut inode = try!(self.create_inode(path, reference));
-        let meta_size = 1000; // add 1000 for encoded metadata
+        let meta_size = inode.estimate_meta_size();
         inode.cum_size = inode.size + meta_size;
         if let Some(ref_inode) = reference {
             if !ref_inode.is_same_meta_quick(&inode) {
@@ -265,6 +265,11 @@ impl Inode {
     pub fn decode(data: &[u8]) -> Result<Self, InodeError> {
         Ok(try!(msgpack::decode(&data)))
     }
+
+    #[inline]
+    pub fn estimate_meta_size(&self) -> u64 {
+        1000
+    }
 }
 
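The new `estimate_meta_size` method replaces the magic `1000` that was previously duplicated at both call sites, so the metadata estimate now has a single definition. A minimal self-contained sketch of that pattern (simplified types, not the actual zVault structs; `update_cum_size` is a hypothetical helper added for illustration):

```rust
// Sketch of the refactoring in this commit: the hard-coded metadata
// estimate moves out of the call sites and into a single method.
struct Inode {
    size: u64,     // file content size in bytes
    cum_size: u64, // cumulative size including estimated metadata
}

impl Inode {
    // Fixed estimate for the encoded metadata, mirroring the real code.
    fn estimate_meta_size(&self) -> u64 {
        1000
    }

    // Hypothetical helper: every call site now goes through the method
    // instead of repeating the constant.
    fn update_cum_size(&mut self) {
        self.cum_size = self.size + self.estimate_meta_size();
    }
}

fn main() {
    let mut inode = Inode { size: 4096, cum_size: 0 };
    inode.update_cum_size();
    println!("cum_size = {}", inode.cum_size); // 4096 + 1000 = 5096
}
```

If the estimate ever becomes metadata-dependent, only `estimate_meta_size` needs to change, which is the point of the refactoring.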
@@ -68,7 +68,7 @@ impl Repository {
                 let path = try!(entry.path()).to_path_buf();
                 match self.import_tar_entry(&mut entry) {
                     Ok(mut inode) => {
-                        inode.cum_size = inode.size + 1000;
+                        inode.cum_size = inode.size + inode.estimate_meta_size();
                         if inode.file_type == FileType::Directory {
                             inode.cum_dirs = 1;
                         } else {