Welcome to duplicity’s documentation!
duplicity
duplicity package
Subpackages
duplicity.backends package
Submodules
duplicity.backends._boto_multi module
duplicity.backends._boto_single module
- class duplicity.backends._boto_single.BotoBackend(parsed_url)[source]
Bases:
Backend
Backend for Amazon’s Simple Storage Service (aka Amazon S3), through the use of the boto module (http://code.google.com/p/boto/).
To make use of this backend you must set aws_access_key_id and aws_secret_access_key in your ~/.boto or /etc/boto.cfg with your Amazon Web Services key id and secret respectively. Alternatively you can export the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
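For illustration, a minimal sketch of the environment-variable route (the s3://bucket/prefix URL form and the placeholder keys are assumptions; see the man page for the exact URL syntax):

    import os

    from duplicity import backend

    # Placeholder credentials -- substitute your own AWS key pair.
    os.environ["AWS_ACCESS_KEY_ID"] = "AKIAEXAMPLE"
    os.environ["AWS_SECRET_ACCESS_KEY"] = "wJalr-EXAMPLE-KEY"

    backend.import_backends()                          # register backend classes
    store = backend.get_backend("s3://bucket/prefix")  # boto resolves the creds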
duplicity.backends._cf_cloudfiles module
duplicity.backends._cf_pyrax module
duplicity.backends.adbackend module
- class duplicity.backends.adbackend.ADBackend(parsed_url)[source]
Bases:
Backend
Backend for Amazon Drive. It communicates directly with Amazon Drive using their RESTful API and does not rely on externally setup software (like acd_cli).
- CLIENT_ID = 'amzn1.application-oa2-client.791c9c2d78444e85a32eb66f92eb6bcc'
- CLIENT_SECRET = '5b322c6a37b25f16d848a6a556eddcc30314fc46ae65c87068ff1bc4588d715b'
- MULTIPART_BOUNDARY = 'DuplicityFormBoundaryd66364f7f8924f7e9d478e19cf4b871d114a1e00262542'
- OAUTH_AUTHORIZE_URL = 'https://www.amazon.com/ap/oa'
- OAUTH_REDIRECT_URL = 'https://breunig.xyz/duplicity/copy.html'
- OAUTH_SCOPE = ['clouddrive:read_other', 'clouddrive:write']
- OAUTH_TOKEN_PATH = '/home/docs/.duplicity_ad_oauthtoken.json'
- OAUTH_TOKEN_URL = 'https://api.amazon.com/auth/o2/token'
- multipart_stream(metadata, source_path)[source]
Generator for multipart/form-data file upload from source file
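The generator keeps large uploads out of memory by yielding the body piece by piece. A hypothetical sketch of the technique (the real boundary layout and field names are defined by the Amazon Drive API; everything here is illustrative):

    def multipart_stream(metadata, source_path, boundary, chunk_size=64 * 1024):
        """Yield a multipart/form-data body piece by piece (illustrative)."""
        # Metadata part
        yield (f"--{boundary}\r\n"
               'Content-Disposition: form-data; name="metadata"\r\n\r\n'
               f"{metadata}\r\n").encode()
        # File part header
        yield (f"--{boundary}\r\n"
               'Content-Disposition: form-data; name="content"; filename="upload"\r\n'
               "Content-Type: application/octet-stream\r\n\r\n").encode()
        # Stream the file in chunks so it never sits fully in memory
        with open(source_path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield chunk
        # Closing boundary
        yield f"\r\n--{boundary}--\r\n".encode()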
duplicity.backends.azurebackend module
- class duplicity.backends.azurebackend.AzureBackend(parsed_url)[source]
Bases:
Backend
Backend for Azure Blob Storage Service
- duplicity.backends.azurebackend._is_valid_container_name(name)[source]
Check whether the given name conforms to the rules for valid container names defined at https://docs.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata.
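A quick usage sketch, assuming the helper returns a boolean; the expected results follow Azure's published naming rules (3-63 characters; lowercase letters, digits and hyphens only):

    from duplicity.backends import azurebackend

    print(azurebackend._is_valid_container_name("my-backups-2024"))  # True
    print(azurebackend._is_valid_container_name("Bad_Name"))  # False: case/underscore
    print(azurebackend._is_valid_container_name("ab"))        # False: too short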
duplicity.backends.b2backend module
duplicity.backends.boxbackend module
duplicity.backends.cfbackend module
duplicity.backends.dpbxbackend module
- class duplicity.backends.dpbxbackend.DPBXBackend(parsed_url)[source]
Bases:
Backend
Connect to remote store using the Dropbox service
duplicity.backends.gdocsbackend module
duplicity.backends.gdrivebackend module
duplicity.backends.giobackend module
- class duplicity.backends.giobackend.GIOBackend(parsed_url)[source]
Bases:
Backend
Use this backend when saving to a GIO URL. This is a bit of a meta-backend, in that it can handle multiple schemes. URLs look like scheme://user@server/path.
- __copy_file(source, target)
- __copy_progress(*args, **kwargs)
- __done_with_mount(fileobj, result, loop)
duplicity.backends.hsibackend module
duplicity.backends.hubicbackend module
- class duplicity.backends.hubicbackend.HubicBackend(parsed_url)[source]
Bases:
PyraxBackend
Backend for Hubic using Pyrax
duplicity.backends.idrivedbackend module
duplicity.backends.imapbackend module
duplicity.backends.jottacloudbackend module
- class duplicity.backends.jottacloudbackend.JottaCloudBackend(parsed_url)[source]
Bases:
Backend
Connect to remote store using JottaCloud API
duplicity.backends.lftpbackend module
duplicity.backends.localbackend module
- class duplicity.backends.localbackend.LocalBackend(parsed_url)[source]
Bases:
Backend
Use this backend when saving to local disk
URLs look like file://testfiles/output. A path relative to the filesystem root is given with an extra slash (file:///usr/local).
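For example, a minimal sketch using the documented get_backend() entry point:

    from duplicity import backend

    backend.import_backends()
    rel = backend.get_backend("file://testfiles/output")  # relative to the CWD
    root = backend.get_backend("file:///usr/local")       # absolute: note extra slash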
duplicity.backends.mediafirebackend module
MediaFire Duplicity Backend
duplicity.backends.megabackend module
- class duplicity.backends.megabackend.MegaBackend(parsed_url)[source]
Bases:
Backend
Connect to remote store using Mega.co.nz API
- _makedir_recursive(path)[source]
Creates a remote directory (recursively, the whole path); ignores errors
- _put(source_path, remote_filename)[source]
uploads file to Mega (deletes it first, to ensure it does not exist)
duplicity.backends.megav2backend module
- class duplicity.backends.megav2backend.Megav2Backend(parsed_url)[source]
Bases:
Backend
Backend for MEGA.nz cloud storage; the only variant that works for accounts created since Nov. 2018. See https://github.com/megous/megatools/issues/411 for details.
This MEGA backend relies on the official tools (MEGAcmd) available at https://mega.nz/cmd. MEGAcmd works through a single binary called “mega-cmd”, which talks to a backend server, “mega-cmd-server”, that keeps state (for example, persisting a session). Multiple “mega-*” shell wrappers (e.g. “mega-ls”) provide the user interface to “mega-cmd” and the MEGA API. The full MEGAcmd User Guide can be found on the project’s GitHub page: https://github.com/meganz/MEGAcmd/blob/master/UserGuide.md
- _check_binary_exists(cmd)[source]
Checks that the specified command exists in the running user’s command search path
- _put(source_path, remote_filename)[source]
Uploads file to the specified remote folder (tries to delete it first to make sure the new one can be uploaded)
- folder_contents(files_only=False)[source]
Lists contents of a remote MEGA path, optionally ignoring subdirectories
duplicity.backends.megav3backend module
- class duplicity.backends.megav3backend.Megav3Backend(parsed_url)[source]
Bases:
Backend
Backend for MEGA.nz cloud storage; the only variant that works for accounts created since Nov. 2018. See https://github.com/megous/megatools/issues/411 for details.
This MEGA backend relies on the official tools (MEGAcmd) available at https://mega.nz/cmd. MEGAcmd works through a single binary called “mega-cmd”, which keeps state (for example, persisting a session). Multiple “mega-*” shell wrappers (e.g. “mega-ls”) provide the user interface to “mega-cmd” and the MEGA API. The full MEGAcmd User Guide can be found on the project’s GitHub page: https://github.com/meganz/MEGAcmd/blob/master/UserGuide.md
- _check_binary_exists(cmd)[source]
Checks that the specified command exists in the running user’s command search path
- _put(source_path, remote_filename)[source]
Uploads file to the specified remote folder (tries to delete it first to make sure the new one can be uploaded)
duplicity.backends.multibackend module
- class duplicity.backends.multibackend.MultiBackend(parsed_url)[source]
Bases:
Backend
Store files across multiple remote stores. The URL is a path to a local file containing URLs and other config defining the remote stores.
- __affinities = {}
- __knownQueryParameters = frozenset({'mode', 'onfail', 'subpath'})
- __mode = 'stripe'
- __mode_allowedSet = frozenset({'mirror', 'stripe'})
- __onfail_mode = 'continue'
- __onfail_mode_allowedSet = frozenset({'abort', 'continue'})
- __stores = []
- __subpath = ''
- __write_cursor = 0
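A hedged sketch of a store-list config file (the JSON layout with a "url" key per store is an assumption; consult the man page of your release for the exact schema). The query parameters then map onto the attributes above:

    import json

    # Assumed format: a JSON list of objects, each carrying a "url" key.
    stores = [
        {"url": "file:///mnt/backup-a"},
        {"url": "sftp://user@host/backups"},
    ]
    with open("/etc/duplicity/multi.json", "w") as f:
        json.dump(stores, f)

    # The documented query parameters select mode and failure handling, e.g.:
    #   multi:///etc/duplicity/multi.json?mode=mirror&onfail=abort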
duplicity.backends.ncftpbackend module
duplicity.backends.onedrivebackend module
- class duplicity.backends.onedrivebackend.DefaultOAuth2Session(api_uri)[source]
Bases:
OneDriveOAuth2Session
A possibly-interactive console session using a built-in API key
- CLIENT_ID = '000000004C12E85D'
- OAUTH_AUTHORIZE_URI = 'https://login.live.com/oauth20_authorize.srf'
- OAUTH_REDIRECT_URI = 'https://login.live.com/oauth20_desktop.srf'
- OAUTH_SCOPE = ['Files.Read', 'Files.ReadWrite', 'User.Read', 'offline_access']
- OAUTH_TOKEN_PATH = '/home/docs/.duplicity_onedrive_oauthtoken.json'
- class duplicity.backends.onedrivebackend.ExternalOAuth2Session(client_id, refresh_token)[source]
Bases:
OneDriveOAuth2Session
Caller is managing tokens and provides an active refresh token.
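A minimal sketch; both values are placeholders for an app registration you manage yourself:

    from duplicity.backends.onedrivebackend import ExternalOAuth2Session

    session = ExternalOAuth2Session(
        client_id="00000000-0000-0000-0000-000000000000",
        refresh_token="0.AAAA-placeholder",
    )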
- class duplicity.backends.onedrivebackend.OneDriveBackend(parsed_url)[source]
Bases:
Backend
Uses Microsoft OneDrive (formerly SkyDrive) for backups.
- API_URI = 'https://graph.microsoft.com/v1.0/'
- REQUIRED_FRAGMENT_SIZE_MULTIPLE = 327680
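327680 bytes is 320 KiB (320 * 1024); as the constant’s name suggests, upload fragments must be sized in multiples of it. A small illustrative helper:

    REQUIRED_FRAGMENT_SIZE_MULTIPLE = 327680  # 320 KiB

    def round_fragment(size_bytes):
        """Round a desired fragment size down to the nearest allowed multiple."""
        multiple = REQUIRED_FRAGMENT_SIZE_MULTIPLE
        return max(multiple, (size_bytes // multiple) * multiple)

    assert round_fragment(5_000_000) == 15 * 327680  # 4,915,200 bytes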
duplicity.backends.par2backend module
- class duplicity.backends.par2backend.Par2Backend(parsed_url)[source]
Bases:
Backend
This backend wraps around other backends, creating Par2 recovery files before the file and the Par2 files are transferred with the wrapped backend.
If a received file is corrupt, it will try to repair it on the fly.
- delete_list(filename_list)[source]
Delete the given filename_list and all .par2 files that belong to them
- get(remote_filename, local_path)[source]
Transfer remote_filename and the related .par2 file into a temp dir. remote_filename will be renamed to local_path before finishing.
If “par2 verify” detects an error, transfer the Par2 volumes into the temp dir and try to repair.
- list()[source]
Return list of filenames (byte strings) present in backend
Files ending with “.par2” will be excluded from the list.
- transfer(method, source_path, remote_filename)[source]
Create Par2 files and transfer the given file and the Par2 files with the wrapped backend.
Par2 must run on the real filename, or it would restore the temp filename later on. So first create a tempdir and symlink source_path into it under remote_filename.
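A hedged sketch of that tempdir-and-symlink step (the invocation assumes the standard "par2 create -r<percent>" CLI):

    import os
    import subprocess
    import tempfile

    def create_par2(source_path, remote_filename, redundancy=10):
        # par2 must see the final (remote) name, or it would record -- and
        # later restore -- the temporary name instead.
        tmpdir = tempfile.mkdtemp()
        link = os.path.join(tmpdir, remote_filename)
        os.symlink(os.path.abspath(source_path), link)
        # "par2 create -r<percent>" writes recovery volumes next to the link
        subprocess.check_call(
            ["par2", "create", f"-r{redundancy}", remote_filename], cwd=tmpdir
        )
        return tmpdir  # now holds remote_filename plus its .par2 volumes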
duplicity.backends.pcabackend module
- class duplicity.backends.pcabackend.PCABackend(parsed_url)[source]
Bases:
Backend
Backend for OVH PCA
- __list_objs(ffilter=None)
duplicity.backends.pydrivebackend module
duplicity.backends.rclonebackend module
duplicity.backends.rsyncbackend module
- class duplicity.backends.rsyncbackend.RsyncBackend(parsed_url)[source]
Bases:
Backend
Connect to remote store using rsync
rsync backend contributed by Sebastian Wilhelmi <seppi@seppi.de>; rsyncd auth and alternate port support Copyright 2010 by Edgar Soldin <edgar.soldin@web.de>
duplicity.backends.s3_boto3_backend module
- class duplicity.backends.s3_boto3_backend.S3Boto3Backend(parsed_url)[source]
Bases:
Backend
Backend for Amazon’s Simple Storage Service (aka Amazon S3), through the use of the boto3 module. (See https://boto3.amazonaws.com/v1/documentation/api/latest/index.html for information on boto3.)
Pursuant to Amazon’s announced deprecation of path style S3 access, this backend only supports virtual host style bucket URIs. See the man page for full details.
To make use of this backend, you must provide AWS credentials. This may be done in several ways: through the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, by the ~/.aws/credentials file, by the ~/.aws/config file, or by using the boto2 style ~/.boto or /etc/boto.cfg files.
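A minimal sketch of the environment-variable route; the exact URL scheme accepted for virtual-host-style buckets is version-dependent and assumed here (see the man page):

    import os

    from duplicity import backend

    # One of several equivalent credential routes; values are placeholders.
    os.environ["AWS_ACCESS_KEY_ID"] = "AKIAEXAMPLE"
    os.environ["AWS_SECRET_ACCESS_KEY"] = "wJalr-EXAMPLE-KEY"

    backend.import_backends()
    store = backend.get_backend("boto3+s3://my-bucket/duplicity")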
duplicity.backends.s3_boto_backend module
duplicity.backends.slatebackend module
duplicity.backends.ssh_paramiko_backend module
- class duplicity.backends.ssh_paramiko_backend.SSHParamikoBackend(parsed_url)[source]
Bases:
Backend
This backend accesses files using the sftp or scp protocols. It does not need any local client programs, but an ssh server and the sftp program must be installed on the remote side (or with scp, the programs scp, ls, mkdir, rm and a POSIX-compliant shell).
Authentication keys are requested from an ssh agent if present, then ~/.ssh/id_rsa/dsa are tried. If -oIdentityFile=path is present in --ssh-options, then that file is also tried. The passphrase for any of these keys is taken from the URI or FTP_PASSWORD. If none of the above are available, password authentication is attempted (using the URI or FTP_PASSWORD).
Missing directories on the remote side will be created.
If scp is active then all operations on the remote side require passing arguments through a shell, which introduces unavoidable quoting issues: directory and file names that contain single quotes will not work. This problem does not exist with sftp.
duplicity.backends.ssh_pexpect_backend module
duplicity.backends.swiftbackend module
duplicity.backends.sxbackend module
duplicity.backends.tahoebackend module
duplicity.backends.webdavbackend module
- class duplicity.backends.webdavbackend.CustomMethodRequest(method, *args, **kwargs)[source]
Bases:
Request
This request subclass allows explicit specification of the HTTP request method. The basic urllib.request.Request class chooses GET or POST depending on self.has_data().
- class duplicity.backends.webdavbackend.VerifiedHTTPSConnection(*args, **kwargs)[source]
Bases:
HTTPSConnection
- class duplicity.backends.webdavbackend.WebDAVBackend(parsed_url)[source]
Bases:
Backend
Backend for accessing a WebDAV repository.
webdav backend contributed in 2006 by Jesper Zedlitz <jesper@zedlitz.de>
- connect(forced=False)[source]
Connect or re-connect to the server; updates self.conn. Reconnects on errors as a precaution, since some errors (e.g. “[Errno 32] Broken pipe” or SSL errors) render the connection unusable.
- get_authorization(response, path)[source]
Fetches the auth header based on the requested method (basic or digest)
- listbody = '<?xml version="1.0"?><D:propfind xmlns:D="DAV:"><D:prop><D:resourcetype/></D:prop></D:propfind>'
Connect to remote store using WebDAV Protocol
Module contents
Imports of backends should not be done directly in this module. All backend imports are done via import_backends() in backend.py. This file is only to instantiate the duplicity.backends module itself.
Submodules
duplicity.asyncscheduler module
Asynchronous job scheduler, for concurrent execution with minimalistic dependency guarantees.
- class duplicity.asyncscheduler.AsyncScheduler(concurrency)[source]
Bases:
object
Easy-to-use scheduler of function calls to be executed concurrently. A very simple dependency mechanism exists in the form of barriers (see insert_barrier()).
Each instance has a concurrency level associated with it. A concurrency of 0 implies that all tasks will be executed synchronously when scheduled. A concurrency of 1 indicates that a task will be executed asynchronously, but never concurrently with other tasks. Both 0 and 1 guarantee strict ordering among all tasks (i.e., they will be executed in the order scheduled).
At concurrency levels above 1, the tasks will end up being executed in an order undetermined except insofar as is enforced by calls to insert_barrier().
An AsyncScheduler should be created for any independent process; the scheduler will assume that if any background job fails (raises an exception), it makes further work moot.
- __execute_caller(caller)
- __init__(concurrency)[source]
Create an asynchronous scheduler that executes jobs with the given level of concurrency.
- __run_asynchronously(fn, params)
- __run_synchronously(fn, params)
- __start_worker(caller)
Start a new worker.
- insert_barrier()[source]
Proclaim that any tasks scheduled prior to the call to this method MUST be executed prior to any tasks scheduled after the call to this method.
The intended use case is that if task B depends on A, a barrier must be inserted in between to guarantee that A happens before B.
- schedule_task(fn, params)[source]
Schedule the given task (callable, typically function) for execution. Pass the given parameters to the function when calling it. Returns a callable which can optionally be used to wait for the task to complete, either by returning its return value or by propagating any exception raised by said task.
This method may block or return immediately, depending on the configuration and state of the scheduler.
This method may also raise an exception in order to trigger failures early, if the task (if run synchronously) or a previous task has already failed.
NOTE: Pay particular attention to the scope in which this is called. In particular, since it will execute concurrently in the background, assuming fn is a closure, any variables used must be properly bound in the closure. This is the reason for the convenience feature of being able to give parameters to the call: it avoids having to wrap the call itself in a function in order to “fixate” variables in, for example, an enclosing loop.
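A usage sketch tying these pieces together; it assumes params is the positional-argument tuple passed to fn, per the description above:

    from duplicity.asyncscheduler import AsyncScheduler

    sched = AsyncScheduler(concurrency=2)

    waiters = []
    for i in range(4):
        # Pass i through params instead of closing over the loop variable --
        # exactly the pitfall the NOTE above describes.
        waiters.append(sched.schedule_task(lambda n: n * n, (i,)))

    sched.insert_barrier()                    # all squares must finish first
    sched.schedule_task(print, ("squares done",))

    print([w() for w in waiters])             # waits; prints [0, 1, 4, 9]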
duplicity.backend module
Provides a common interface to all backends and certain services intended to be used by the backends themselves.
- class duplicity.backend.Backend(parsed_url)[source]
Bases:
object
See README in backends directory for information on how to write a backend.
- __subprocess_popen(args)
For internal use. Execute the given command line, interpreted as a shell command. Returns int Exitcode, string StdOut, string StdErr
- get_password()[source]
Return a password for authentication purposes. The password will be obtained from the backend URL, the environment, by asking the user, or by some other method. When applicable, the result will be cached for future invocations.
- munge_password(commandline)[source]
Remove password from commandline by substituting the password found in the URL, if any, with a generic place-holder.
This is intended for display purposes only, and it is not guaranteed that the results are correct (i.e., more than just the ‘:password@’ may be substituted).
- popen_breaks = {}
- subprocess_popen(commandline)[source]
Execute the given command line with error check. Returns int Exitcode, string StdOut, string StdErr
Raise a BackendException on failure.
- use_getpass = True
- class duplicity.backend.BackendWrapper(backend)[source]
Bases:
object
Represents a generic duplicity backend, capable of storing and retrieving files.
- __do_put(source_path, remote_filename)
- close()[source]
Close the backend, releasing any resources held and invalidating any file objects obtained from the backend.
- get_data(filename, parseresults=None)[source]
Retrieve a file from backend, process it, return contents.
- get_fileobj_read(filename, parseresults=None)[source]
Return fileobject opened for reading of filename on backend
The file will be downloaded first into a temp file. When the returned fileobj is closed, the temp file will be deleted.
- pre_process_download(remote_filename)[source]
Manages remote access before downloading files (for instance, unsealing data in cold storage)
- class duplicity.backend.ParsedUrl(url_string)[source]
Bases:
object
Parse the given URL as a duplicity backend URL.
Returns the data of a parsed URL with the same member names as the standard urlparse.urlparse(), except that all values have been resolved rather than deferred. There are no get_* members. This makes sure that URL parsing errors are detected early.
Raise InvalidBackendURL on invalid URLs
- duplicity.backend.get_backend(url_string)[source]
Instantiate a backend suitable for the given URL, or return None if the given string looks like a local path rather than a URL.
Raise InvalidBackendURL if the URL is not a valid URL.
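A short illustrative sketch of both entry points (the urlparse-style attribute names on ParsedUrl are an assumption based on the description above):

    from duplicity import backend

    backend.import_backends()

    pu = backend.ParsedUrl("sftp://user@host/backups")
    print(pu.scheme, pu.hostname, pu.path)   # assumed urlparse-style members

    print(backend.get_backend("/tmp/plain/path"))        # None: a local path
    print(backend.get_backend("sftp://user@host/path"))  # a backend instance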
- duplicity.backend.get_backend_object(url_string)[source]
Find the right backend class instance for the given URL, or return None if the given string looks like a local path rather than a URL.
Raise InvalidBackendURL if the URL is not a valid URL.
- duplicity.backend.import_backends()[source]
Import files in the duplicity/backends directory where the filename ends in ‘backend.py’ and ignore the rest.
@rtype: void
@return: void
- duplicity.backend.is_backend_url(url_string)[source]
@return Whether the given string looks like a backend URL.
- duplicity.backend.register_backend(scheme, backend_factory)[source]
Register a given backend factory responsible for URLs with the given scheme.
The backend must be a callable which, when called with a URL as the single parameter, returns an object implementing the backend protocol (i.e., a subclass of Backend).
Typically the callable will be the Backend subclass itself.
This function is not thread-safe and is intended to be called during module importation or start-up.
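A toy registration sketch; the _put/_list method names follow the conventions visible elsewhere on this page, while the real protocol is described in the backends README:

    from duplicity import backend

    class NullBackend(backend.Backend):
        """Toy backend that discards writes and lists nothing."""

        def _put(self, source_path, remote_filename):
            pass              # discard the volume

        def _list(self):
            return []         # nothing stored

    # Call at import/start-up time; registration is not thread-safe.
    backend.register_backend("null", NullBackend)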
- duplicity.backend.register_backend_prefix(scheme, backend_factory)[source]
Register a given backend factory responsible for URLs with the given scheme prefix.
The backend must be a callable which, when called with a URL as the single parameter, returns an object implementing the backend protocol (i.e., a subclass of Backend).
Typically the callable will be the Backend subclass itself.
This function is not thread-safe and is intended to be called during module importation or start-up.
duplicity.cached_ops module
Cache-wrapped functions for grp and pwd lookups.
duplicity.cli_data module
Data for parsing the command line, checking for consistency, and setting config
- class duplicity.cli_data.CommandAliases[source]
Bases:
object
commands and aliases
- __init__() → None
- backup = ['back', 'bu']
- cleanup = ['clean', 'cl']
- collection_status = ['stat', 'st']
- full = ['fb']
- incremental = ['inc', 'ib']
- list_current_files = ['list', 'ls']
- remove_all_but_n_full = ['rmfull', 'rf']
- remove_all_inc_of_but_n_full = ['rminc', 'ri']
- remove_older_than = ['rmolder', 'ro']
- restore = ['rest', 'rb']
- verify = ['veri', 'vb']
- class duplicity.cli_data.CommandOptions[source]
Bases:
object
legal options by command
- __init__() → None
- backup = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--volsize', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--filter-regexp', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--files-from', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--exclude-regexp', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--filter-literal', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--filter-strictcase', '--exclude-if-present', '--gpg-options', '--num-retries', '--s3-kms-grant', '--exclude-filelist', '--mp-segment-size', '--timeout', '--s3-use-glacier', '--ignore-errors', '--exclude', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--include', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--include-regexp', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--filter-ignorecase', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--dry-run', '--include-filelist', '--asynchronous-upload', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--filter-globbing', '--copy-links', '--s3-kms-key-id', '--name', '--s3-use-multiprocessing']
- cleanup = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
- collection_status = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
- full = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--volsize', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--filter-regexp', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--files-from', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--exclude-regexp', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--filter-literal', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--filter-strictcase', '--exclude-if-present', '--gpg-options', '--num-retries', '--s3-kms-grant', '--exclude-filelist', '--mp-segment-size', '--timeout', '--s3-use-glacier', '--ignore-errors', '--exclude', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--include', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--include-regexp', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--filter-ignorecase', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--dry-run', '--include-filelist', '--asynchronous-upload', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--filter-globbing', '--copy-links', '--s3-kms-key-id', '--name', '--s3-use-multiprocessing']
- incremental = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--volsize', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--filter-regexp', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--files-from', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--exclude-regexp', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--filter-literal', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--filter-strictcase', '--exclude-if-present', '--gpg-options', '--num-retries', '--s3-kms-grant', '--exclude-filelist', '--mp-segment-size', '--timeout', '--s3-use-glacier', '--ignore-errors', '--exclude', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--include', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--include-regexp', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--filter-ignorecase', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--dry-run', '--include-filelist', '--asynchronous-upload', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--filter-globbing', '--copy-links', '--s3-kms-key-id', '--name', '--s3-use-multiprocessing']
- list_current_files = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
- remove_all_but_n_full = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
- remove_all_inc_of_but_n_full = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
- remove_older_than = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
- restore = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
- verify = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
- class duplicity.cli_data.DuplicityCommands[source]
Bases:
object
duplicity commands and positional args expected
- NOTE: cli_util must contain a function named check_* for each positional arg,
for example check_source_path() to check for source path validity.
- __init__() → None
- backup = ['source_path', 'target_url']
- cleanup = ['target_url']
- collection_status = ['target_url']
- full = ['source_path', 'target_url']
- incremental = ['source_path', 'target_url']
- list_current_files = ['target_url']
- remove_all_but_n_full = ['count', 'target_url']
- remove_all_inc_of_but_n_full = ['count', 'target_url']
- remove_older_than = ['remove_time', 'target_url']
- restore = ['source_url', 'target_dir']
- verify = ['source_url', 'target_dir']
- class duplicity.cli_data.OptionAliases[source]
Bases:
object
- __init__() → None
- path_to_restore = ['-r']
- restore_time = ['-t', '--time']
- verbosity = ['-v']
- version = ['-V']
- class duplicity.cli_data.OptionKwargs[source]
Bases:
object
Option kwargs for add_argument
- __init__() → None
- allow_source_mismatch = {'action': 'store_true', 'default': False, 'help': 'Allow different source directories'}
- archive_dir = {'default': '/home/docs/.cache/duplicity', 'help': 'Path to store metadata archives', 'metavar': 'path', 'type': <function check_file>}
- asynchronous_upload = {'action': 'store_const', 'const': 1, 'default': 0, 'dest': 'async_concurrency', 'help': 'Number of async upload tasks, max of 1'}
- azure_blob_tier = {'default': None, 'help': 'Standard storage tier used for storing backup files (Hot|Cool|Archive)', 'metavar': 'Hot|Cool|Archive'}
- azure_max_block_size = {'default': None, 'help': 'Number for the block size to upload a blob if the length is unknown\nor is larger than the value set by --azure-max-single-put-size\nThe maximum block size the service supports is 100MiB.', 'metavar': 'number', 'type': <class 'int'>}
- azure_max_connections = {'default': None, 'help': 'Number of maximum parallel connections to use when the blob size exceeds 64MB', 'metavar': 'number', 'type': <class 'int'>}
- azure_max_single_put_size = {'default': None, 'help': 'Largest supported upload size where the Azure library makes only one put call.\nUsed to upload a single block if the content length is known and is less than this', 'metavar': 'number', 'type': <class 'int'>}
- b2_hide_files = {'action': 'store_true', 'default': False, 'help': 'Whether the B2 backend hides files instead of deleting them'}
- backend_retry_delay = {'default': 30, 'help': 'Delay time before next try after a failure of a backend operation', 'metavar': 'seconds', 'type': <class 'int'>}
- cf_backend = {'default': 'pyrax', 'help': 'Allow the user to switch cloudfiles backend', 'metavar': 'pyrax|cloudfiles'}
- compare_data = {'action': 'store_true', 'default': False, 'help': 'Compare data on verify not only signatures'}
- config_dir = {'default': '/home/docs/.cache/duplicity', 'help': 'Path to store configuration files', 'metavar': 'path', 'type': <function check_file>}
- copy_links = {'action': 'store_true', 'default': False, 'help': 'Copy contents of symlinks instead of linking'}
- current_time = {'help': '==SUPPRESS==', 'type': <class 'int'>}
- do_not_restore_ownership = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
- dry_run = {'action': 'store_true', 'default': False, 'help': 'Perform dry-run with no writes'}
- encrypt_key = {'default': None, 'help': 'GNUpg key for encryption/decryption', 'metavar': 'gpg-key-id', 'type': <function set_encrypt_key>}
- encrypt_secret_keyring = {'default': None, 'help': 'Path to secret GNUpg keyring', 'metavar': 'path'}
- exclude = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'Exclude globbing pattern', 'metavar': 'shell_pattern'}
- exclude_device_files = {'action': 'store_true', 'default': False, 'help': 'Exclude device files'}
- exclude_filelist = {'action': <class 'duplicity.cli_util.AddFilelistAction'>, 'default': None, 'help': 'File with list of file patterns to exclude', 'metavar': 'filename'}
- exclude_filelist_stdin = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
- exclude_globbing_filelist = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
- exclude_if_present = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'Exclude directory if this file is present', 'metavar': 'filename'}
- exclude_older_than = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'Exclude files older than time', 'metavar': 'time'}
- exclude_other_filesystems = {'action': 'store_true', 'default': False, 'help': 'Exclude other filesystems from backup'}
- exclude_regexp = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'Exclude based on regex pattern', 'metavar': 'regex'}
- fail_on_volume = {'help': '==SUPPRESS==', 'type': <class 'int'>}
- file_changed = {'default': None, 'help': 'Whether to collect only the file status, not the whole root', 'metavar': 'path', 'type': <function check_file>}
- file_prefix = {'default': b'', 'help': 'String prefix for all duplicity files', 'metavar': 'string', 'type': <function make_bytes>}
- file_prefix_archive = {'default': b'', 'help': 'String prefix for duplicity difftar files', 'metavar': 'string', 'type': <function make_bytes>}
- file_prefix_manifest = {'default': b'', 'help': 'String prefix for duplicity manifest files', 'metavar': 'string', 'type': <function make_bytes>}
- file_prefix_signature = {'default': b'', 'help': 'String prefix for duplicity signature files', 'metavar': 'string', 'type': <function make_bytes>}
- file_to_restore = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
- files_from = {'action': <class 'duplicity.cli_util.AddFilelistAction'>, 'default': None, 'help': 'Defines the backup source as a sub-set of the source folder', 'metavar': 'filename', 'type': <function check_file>}
- filter_globbing = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'File selection mode switch, changes the interpretation of any subsequent\n--exclude* or --include* options to shell globbing.', 'nargs': 0}
- filter_ignorecase = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'File selection mode switch, changes the interpretation of any subsequent\n--exclude* or --include* options to case-insensitive matching.', 'nargs': 0}
- filter_literal = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'File selection mode switch, changes the interpretation of any subsequent\n--exclude* or --include* options to literal strings.', 'nargs': 0}
- filter_regexp = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'File selection mode switch, changes the interpretation of any subsequent\n--exclude* or --include* options to regular expressions.', 'nargs': 0}
- filter_strictcase = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'File selection mode switch, changes the interpretation of any subsequent\n--exclude* or --include* options to case-sensitive matching.', 'nargs': 0}
- force = {'action': 'store_true', 'default': None, 'help': 'Force duplicity to actually delete during cleanup'}
- ftp_passive = {'action': 'store_const', 'const': 'passive', 'default': 'passive', 'dest': 'ftp_connection', 'help': 'Tell FTP to use passive mode'}
- ftp_regular = {'action': 'store_const', 'const': 'regular', 'default': 'passive', 'dest': 'ftp_connection', 'help': 'Tell FTP to use regular mode'}
- full_if_older_than = {'default': None, 'help': "Perform full backup if last full is older than 'time'", 'metavar': 'time', 'type': <function check_time>}
- gio = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
- gpg_binary = {'default': None, 'help': 'Path to GNUpg executable file', 'metavar': 'path', 'type': <function check_file>}
- gpg_options = {'action': 'append', 'default': None, 'help': 'Options to append to GNUpg invocation', 'metavar': 'options'}
- idr_fakeroot = {'default': None, 'help': 'Fake root for idrive backend', 'metavar': 'path', 'type': <function check_file>}
- ignore_errors = {'action': 'store_true', 'default': False, 'help': 'Ignore most errors during processing'}
- imap_full_address = {'action': 'store_true', 'default': False, 'help': 'Whether to use the full email address as the user name'}
- imap_mailbox = {'default': 'INBOX', 'help': 'Name of the imap folder to store backups', 'metavar': 'imap_mailbox'}
- include = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'Include globbing pattern', 'metavar': 'shell_pattern'}
- include_filelist = {'action': <class 'duplicity.cli_util.AddFilelistAction'>, 'default': None, 'help': 'File with list of file patterns to include', 'metavar': 'filename'}
- include_filelist_stdin = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
- include_globbing_filelist = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
- include_regexp = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'Include based on regex pattern', 'metavar': 'regex'}
- log_fd = {'default': None, 'help': 'Logging file descriptor to use', 'metavar': 'file_descriptor', 'type': <function set_log_fd>}
- log_file = {'default': None, 'help': 'Logging filename to use', 'metavar': 'log_filename', 'type': <function set_log_file>}
- log_timestamp = {'action': 'store_true', 'default': False, 'help': 'Whether to include timestamp and level in log'}
- max_blocksize = {'default': 2048, 'help': 'Maximum block size for large files in MB', 'metavar': 'number', 'type': <class 'int'>}
- metadata_sync_mode = {'choices': ('full', 'partial'), 'default': 'partial', 'help': 'Only sync required metadata not all'}
- mf_purge = {'action': 'store_true', 'default': False, 'help': 'Option for mediafire to purge files on delete instead of sending to trash'}
- mp_segment_size = {'default': 230686720, 'help': 'Swift backend segment size', 'metavar': 'number', 'type': <function set_megs>}
- name = {'default': None, 'dest': 'backup_name', 'help': 'Custom backup name instead of hash', 'metavar': 'backup name'}
- no_compression = {'action': 'store_false', 'default': True, 'dest': 'compression', 'help': 'If supplied do not perform compression'}
- no_encryption = {'action': 'store_false', 'default': True, 'dest': 'encryption', 'help': 'If supplied do not perform encryption'}
- no_files_changed = {'action': 'store_false', 'default': True, 'dest': 'files_changed', 'help': 'If supplied do not collect the files_changed list'}
- no_print_statistics = {'action': 'store_false', 'default': True, 'dest': 'print_statistics', 'help': 'If supplied do not print statistics'}
- no_restore_ownership = {'action': 'store_false', 'default': True, 'dest': 'restore_ownership', 'help': 'If supplied do not restore uid/gid when finished'}
- null_separator = {'action': 'store_true', 'default': None, 'help': 'Whether to split on null instead of newline'}
- num_retries = {'default': 5, 'help': 'Number of retries on network operations', 'metavar': 'number', 'type': <class 'int'>}
- numeric_owner = {'action': 'store_true', 'default': False, 'help': 'Keeps number from tar file. Like same option in GNU tar.'}
- old_filenames = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
- par2_options = {'action': 'append', 'default': '', 'help': 'Verbatim par2 options. May be supplied multiple times.', 'metavar': 'options'}
- par2_redundancy = {'default': 10, 'help': 'Level of Redundancy in percent for Par2 files', 'metavar': 'number', 'type': <class 'int'>}
- par2_volumes = {'default': 1, 'help': 'Number of par2 volumes', 'metavar': 'number', 'type': <class 'int'>}
- path_to_restore = {'default': None, 'dest': 'restore_path', 'help': 'File or directory path to restore', 'metavar': 'path', 'type': <function check_file>}
- progress = {'action': 'store_true', 'default': False, 'help': 'Display progress for the full and incremental backup operations'}
- progress_rate = {'default': 3, 'help': 'Used to control the progress option update rate in seconds', 'metavar': 'number', 'type': <class 'int'>}
- pydevd = {'action': 'store_true', 'help': '==SUPPRESS=='}
- rename = {'action': <class 'duplicity.cli_util.AddRenameAction'>, 'default': None, 'help': 'Rename files during restore', 'metavar': 'from to', 'nargs': 2}
- restore_time = {'default': None, 'help': 'Restores will try to bring back the state as of the following time', 'metavar': 'time', 'type': <function check_time>}
- rsync_options = {'action': 'append', 'default': '', 'help': 'User added rsync options', 'metavar': 'options'}
- s3_endpoint_url = {'action': 'store', 'default': None, 'help': 'Specify S3 endpoint', 'metavar': 's3_endpoint_url'}
- s3_european_buckets = {'action': 'store_true', 'default': False, 'help': 'Whether to create European buckets'}
- s3_kms_grant = {'action': 'store', 'default': None, 'help': 'S3 KMS grant value', 'metavar': 's3_kms_grant'}
- s3_kms_key_id = {'action': 'store', 'default': None, 'help': 'S3 KMS encryption key id', 'metavar': 's3_kms_key_id'}
- s3_multipart_chunk_size = {'default': 20, 'help': 'Chunk size used for S3 multipart uploads. The number of parallel uploads to\nS3 is given by chunk size / volume size. Use this to maximize the use of\nyour bandwidth', 'metavar': 'number', 'type': <function set_megs>}
- s3_multipart_max_procs = {'default': 4, 'help': 'Number of processes to set the Processor Pool to when uploading multipart\nuploads to S3. Use this to control the maximum simultaneous uploads to S3', 'metavar': 'number', 'type': <class 'int'>}
- s3_multipart_max_timeout = {'default': None, 'help': 'Number of seconds to wait for each part of a multipart upload to S3. Use this\nto prevent hangups when doing a multipart upload to S3', 'metavar': 'number', 'type': <class 'int'>}
- s3_region_name = {'action': 'store', 'default': None, 'help': 'Specify S3 region name', 'metavar': 's3_region_name'}
- s3_unencrypted_connection = {'action': 'store_true', 'default': False, 'help': 'Whether to use plain HTTP (without SSL) to send data to S3'}
- s3_use_deep_archive = {'action': 'store_true', 'default': False, 'help': 'Whether to use S3 Glacier Deep Archive Storage'}
- s3_use_glacier = {'action': 'store_true', 'default': False, 'help': 'Whether to use S3 Glacier Storage'}
- s3_use_glacier_ir = {'action': 'store_true', 'default': False, 'help': 'Whether to use S3 Glacier IR Storage'}
- s3_use_ia = {'action': 'store_true', 'default': False, 'help': 'Whether to use S3 Infrequent Access Storage'}
- s3_use_multiprocessing = {'action': 'store_true', 'default': False, 'help': 'Option to allow the s3/boto backend use the multiprocessing version'}
- s3_use_new_style = {'action': 'store_true', 'default': False, 'help': 'Whether to use new-style subdomain addressing for S3 buckets. Such\nuse is not backwards-compatible with upper-case buckets, or buckets\nthat are otherwise not expressible in a valid hostname'}
- s3_use_onezone_ia = {'action': 'store_true', 'default': False, 'help': 'Whether to use S3 One Zone Infrequent Access Storage'}
- s3_use_rrs = {'action': 'store_true', 'default': False, 'help': 'Whether to use S3 Reduced Redundancy Storage'}
- s3_use_server_side_encryption = {'action': 'store_true', 'default': False, 'dest': 's3_use_sse', 'help': 'Option to allow use of server side encryption in s3'}
- s3_use_server_side_kms_encryption = {'action': 'store_true', 'default': False, 'dest': 's3_use_sse_kms', 'help': 'Allow use of server side KMS encryption'}
- scp_command = {'default': None, 'help': 'SCP command to use (ssh pexpect backend)', 'metavar': 'command'}
- sftp_command = {'default': None, 'help': 'SFTP command to use (ssh pexpect backend)', 'metavar': 'command'}
- short_filenames = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
- show_changes_in_set = {'default': None, 'help': 'Show file changes (new, deleted, changed) in the specified backup\nset (0 specifies latest, 1 specifies next latest, etc.)', 'metavar': 'number', 'type': <class 'int'>}
- sign_key = {'default': None, 'help': 'Sign key for encryption/decryption', 'metavar': 'gpg-key-id', 'type': <function set_sign_key>}
- skip_volume = {'help': '==SUPPRESS==', 'type': <class 'int'>}
- ssh_askpass = {'action': 'store_true', 'default': False, 'help': 'Ask the user for the SSH password. Not for batch usage'}
- ssh_options = {'action': 'append', 'default': '', 'help': 'SSH options to add', 'metavar': 'options'}
- ssl_cacert_file = {'default': None, 'help': 'pem formatted bundle of certificate authorities', 'metavar': 'file'}
- ssl_cacert_path = {'default': None, 'help': 'path to a folder with certificate authority files', 'metavar': 'path'}
- ssl_no_check_certificate = {'action': 'store_true', 'default': False, 'help': 'Set to not validate SSL certificates'}
- swift_storage_policy = {'default': '', 'help': 'Option to specify a Swift container storage policy.', 'metavar': 'policy'}
- tempdir = {'default': None, 'dest': 'temproot', 'help': 'Working directory for temp files', 'metavar': 'path', 'type': <function check_file>}
- time_separator = {'default': ':', 'help': "Character used like the ':' in time strings like\n2002-08-06T04:22:00-07:00", 'metavar': 'char'}
- timeout = {'default': 30, 'help': 'Network timeout in seconds', 'metavar': 'seconds', 'type': <class 'int'>}
- use_agent = {'action': 'store_true', 'default': False, 'help': 'Whether to specify --use-agent in GnuPG options'}
- verbosity = {'default': 3, 'help': 'Logging verbosity', 'metavar': '[0-9]', 'type': <function check_verbosity>}
- version = {'action': 'version', 'help': 'Display version and exit', 'version': '%(prog)s $version'}
- volsize = {'default': 200, 'help': 'Volume size to use in MiB', 'metavar': 'number', 'type': <function set_megs>}
- webdav_headers = {'default': '', 'help': "extra headers for Webdav, like 'Cookie,name: value'", 'metavar': 'string'}
duplicity.cli_main module
Main for parse command line, check for consistency, and set config
- class duplicity.cli_main.DuplicityHelpFormatter(prog, indent_increment=2, max_help_position=24, width=None)[source]
Bases:
ArgumentDefaultsHelpFormatter, RawDescriptionHelpFormatter
A working class combining ArgumentDefaultsHelpFormatter and RawDescriptionHelpFormatter. Use with make_wide() to ensure we catch argparse API changes.
- duplicity.cli_main.make_wide(formatter, w=120, h=46)[source]
Return a wider HelpFormatter, if possible. See: https://stackoverflow.com/a/5464440 Beware: “Only the name of this class is considered a public API.”
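The linked recipe works roughly as in this reconstruction (not necessarily duplicity’s exact code): argparse formatters officially take only prog, so the extra keywords are probed once with a throwaway instance and then bound with functools.partial:
import functools
import warnings

def make_wide(formatter, w=120, h=46):
    # Probe the private keywords; only the class *name* is public API.
    try:
        kwargs = {'width': w, 'max_help_position': h}
        formatter(None, **kwargs)            # throwaway instance as a test
        return functools.partial(formatter, **kwargs)
    except TypeError:
        warnings.warn("argparse help formatter failed, falling back.")
        return formatter
The result can then be passed as the formatter_class argument when constructing an ArgumentParser.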
duplicity.cli_util module
Utils for parse command line, check for consistency, and set config
- class duplicity.cli_util.AddFilelistAction(option_strings, dest, **kwargs)[source]
Bases:
DuplicityAction
- class duplicity.cli_util.AddRenameAction(option_strings, dest, **kwargs)[source]
Bases:
DuplicityAction
- class duplicity.cli_util.AddSelectionAction(option_strings, dest, **kwargs)[source]
Bases:
DuplicityAction
- class duplicity.cli_util.DeprecationAction(option_strings, dest, **kwargs)[source]
Bases:
DuplicityAction
- duplicity.cli_util.expand_archive_dir(archdir, backname)[source]
Return expanded version of archdir joined with backname.
- duplicity.cli_util.generate_default_backup_name(backend_url)[source]
@param backend_url: URL to backend. @returns A default backup name (string).
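Given that --name is documented as a custom name "instead of hash", the default name is presumably a digest of the backend URL; a minimal sketch of that idea:
from hashlib import md5

def generate_default_backup_name(backend_url):
    # A stable, filesystem-safe name derived from the URL (sketch).
    return md5(backend_url.encode()).hexdigest()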
- duplicity.cli_util.set_encrypt_key(encrypt_key)[source]
Set config.gpg_profile.encrypt_key assuming proper key given
- duplicity.cli_util.set_hidden_encrypt_key(hidden_encrypt_key)[source]
Set config.gpg_profile.hidden_encrypt_key assuming proper key given
- duplicity.cli_util.set_selection()[source]
Return selection iter starting at filename with arguments applied
duplicity.config module
Store global configuration information
duplicity.diffdir module
Functions for producing signatures and deltas of directories
Note that the main processes of this module have two parts. In the first, the signature or delta is constructed from a ROPath iterator. In the second, the ROPath iterator is put into tar block form.
- class duplicity.diffdir.DeltaTarBlockIter(input_iter)[source]
Bases:
TarBlockIter
TarBlockIter that yields parts of a deltatar file
Unlike SigTarBlockIter, the argument to __init__ is a delta_path_iter, so the delta information has already been calculated.
- duplicity.diffdir.DirDelta(path_iter, dirsig_fileobj_list)[source]
Produce tarblock diff given dirsig_fileobj_list and pathiter
dirsig_fileobj_list should either be a tar fileobj or a list of those, sorted so the most recent is last.
- duplicity.diffdir.DirDelta_WriteSig(path_iter, sig_infp_list, newsig_outfp)[source]
Like DirDelta but also write signature into sig_fileobj
Like DirDelta, sig_infp_list can be a tar fileobj or a sorted list of those. A signature will only be written to newsig_outfp if it is different from (the combined) sig_infp_list.
- duplicity.diffdir.DirFull(path_iter)[source]
Return a tarblock full backup of items in path_iter
A full backup is just a diff starting from nothing (it may be less elegant than using a standard tar file, but we can be sure that it will be easy to split up the tar and make the volumes the same sizes).
- duplicity.diffdir.DirFull_WriteSig(path_iter, sig_outfp)[source]
Return full backup like above, but also write signature to sig_outfp
- class duplicity.diffdir.DummyBlockIter(input_iter)[source]
Bases:
TarBlockIter
TarBlockIter that does no file reading
- class duplicity.diffdir.FileWithReadCounter(infile)[source]
Bases:
object
File-like object which also computes amount read as it is read
- class duplicity.diffdir.FileWithSignature(infile, callback, filelen, *extra_args)[source]
Bases:
object
File-like object which also computes signature as it is read
- __init__(infile, callback, filelen, *extra_args)[source]
FileTee initializer
The object will act like infile, but whenever it is read it add infile’s data to a SigGenerator object. When the file has been read to the end the callback will be called with the calculated signature, and any extra_args if given.
filelen is used to calculate the block size of the signature.
- blocksize = 32768
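A hypothetical usage sketch (file name and block size are illustrative), relying on the callback behaviour described above:
import os
from duplicity.diffdir import FileWithSignature

def on_signature(sig, name):
    # Fires once the wrapped file has been read to the end;
    # extra_args ("data.bin" here) are forwarded to the callback.
    print(name, "signature computed")

filelen = os.path.getsize("data.bin")
with open("data.bin", "rb") as infile:
    tee = FileWithSignature(infile, on_signature, filelen, "data.bin")
    while tee.read(32768):
        pass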
- class duplicity.diffdir.SigTarBlockIter(input_iter)[source]
Bases:
TarBlockIter
TarBlockIter that yields blocks of a signature tar from path_iter
- class duplicity.diffdir.TarBlock(index, data)[source]
Bases:
object
Contain information to add next file to tar
- class duplicity.diffdir.TarBlockIter(input_iter)[source]
Bases:
object
A bit like an iterator, yield tar blocks given input iterator
Unlike an iterator, however, control over the maximum size of a tarblock is available by passing an argument to next(). Also the get_footer() is available.
- get_footer()[source]
Return closing string for tarfile, reset offset
- process_continued()[source]
Get more tarblocks
If processing val above would produce more than one TarBlock, get the rest of them by calling process_continued.
- duplicity.diffdir.collate2iters(riter1, riter2)[source]
Collate two iterators.
The elements yielded by each iterator must have an index variable, and this function returns pairs (elem1, elem2), (elem1, None), or (None, elem2). The two elements in a pair will have the same index, and earlier indices are yielded before later indices.
- duplicity.diffdir.combine_path_iters(path_iter_list)[source]
Produce new iterator by combining the iterators in path_iter_list
This new iter will iterate every path that is in path_iter_list in order of increasing index. If multiple iterators in path_iter_list yield paths with the same index, combine_path_iters will discard all paths but the one yielded by the last path_iter.
This is used to combine signature iters, as the output will be a full up-to-date signature iter.
- duplicity.diffdir.delta_iter_error_handler(exc, new_path, sig_path, sig_tar=None)[source]
Called by get_delta_iter, report error in getting delta
- duplicity.diffdir.get_block_size(file_len)[source]
Return a reasonable block size to use on files of length file_len
If the block size is too big, deltas will be bigger than is necessary. If the block size is too small, making deltas and patching can take a really long time.
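One heuristic consistent with this trade-off (constants are illustrative, not the shipped values): keep a small floor for small files, then scale with file length up to the --max-blocksize cap.
def get_block_size(file_len, max_blocksize=2048 * 1024):
    # Small files: a small fixed block keeps deltas tight.
    if file_len < 1024 * 1024:
        return 512
    # Large files: aim for a bounded block count, in 512-byte multiples.
    blocksize = (file_len // 2000 // 512) * 512 or 512
    return min(blocksize, max_blocksize)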
- duplicity.diffdir.get_combined_path_iter(sig_infp_list)[source]
Return path iter combining signatures in list of open sig files
- duplicity.diffdir.get_delta_iter(new_iter, sig_iter, sig_fileobj=None)[source]
Generate delta iter from new Path iter and sig Path iter.
For each delta path of regular file type, path.difftype will be set to “snapshot” or “diff”. sig_iter will probably iterate ROPaths instead of Paths.
If sig_fileobj is not None, will also write signatures to sig_fileobj.
- duplicity.diffdir.get_delta_path(new_path, sig_path, sigTarFile=None)[source]
Return new delta_path which, when read, writes sig to sig_fileobj, if sigTarFile is not None
- duplicity.diffdir.log_delta_path(delta_path, new_path=None, stats=None)[source]
Look at delta path and log delta. Add stats if new_path is set
duplicity.dup_collections module
Classes and functions on collections of backup volumes
- class duplicity.dup_collections.BackupChain(backend)[source]
Bases:
object
BackupChain - a number of linked BackupSets
A BackupChain always starts with a full backup set and continues with incremental ones.
- class duplicity.dup_collections.BackupSet(backend, action)[source]
Bases:
object
Backup set - the backup information produced by one session
- add_filename(filename, pr=None)[source]
Add a filename to given set. Return true if it fits.
The filename will match the given set if it has the right times and is of the right type. The information will be set from the first filename given.
@param filename: name of file to add @type filename: string
@param pr: pre-computed result of file_naming.parse(filename) @type pr: Optional[ParseResults]
- class duplicity.dup_collections.CollectionsStatus(backend, archive_dir_path, action)[source]
Bases:
object
Hold information about available chains and sets
- get_backup_chain_at_time(time)[source]
Return backup chain covering specified time
Tries to find the backup chain covering the given time. If there is none, return the earliest chain before, and failing that, the earliest chain.
- get_backup_chains(filename_list)[source]
Split given filename_list into chains
Return value will be tuple (list of chains, list of sets, list of incomplete sets), where the list of sets will comprise sets not fitting into any chain, and the incomplete sets are sets missing files.
- get_chains_older_than(t)[source]
Returns a list of backup chains older than the given time t
All of the times will be associated with an intact chain. Furthermore, none of the times will be of a chain which a newer set may depend on. For instance, if set A is a full set older than t, and set B is an incremental based on A which is newer than t, then the time of set A will not be returned.
- get_extraneous()[source]
Return list of the names of extraneous duplicity files
A duplicity file is considered extraneous if it is recognizable as a duplicity file, but isn’t part of some complete backup set, or current signature chain.
- get_last_backup_chain()[source]
Return the last full backup of the collection, or None if there is no full backup chain.
- get_last_full_backup_time()[source]
Return the time of the last full backup, or 0 if there is none.
- get_nth_last_backup_chain(n)[source]
Return the nth-to-last full backup of the collection, or None if there are fewer than n backup chains.
NOTE: n = 1 -> time of latest available chain (n = 0 is not a valid input). Thus the second-to-last is obtained with n=2 rather than n=1.
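Illustrating the 1-based convention (col_stats being a CollectionsStatus instance):
latest = col_stats.get_nth_last_backup_chain(1)    # newest full chain
previous = col_stats.get_nth_last_backup_chain(2)  # second-to-last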
- get_nth_last_full_backup_time(n)[source]
Return the time of the nth to last full backup, or 0 if there is none.
- get_older_than(t)[source]
Returns a list of backup sets older than the given time t
All of the times will be associated with an intact chain. Furthermore, none of the times will be of a set which a newer set may depend on. For instance, if set A is a full set older than t, and set B is an incremental based on A which is newer than t, then the time of set A will not be returned.
- get_older_than_required(t)[source]
Returns list of old backup sets required by new sets
This function is similar to the previous one, but it only returns the times of sets which are old but part of the chains where the newer end of the chain is newer than t.
- get_signature_chain_at_time(time)[source]
Return signature chain covering specified time
Tries to find the signature chain covering the given time. If there is none, return the earliest chain before, and failing that, the earliest chain.
- get_signature_chains(local, filelist=None)[source]
Find chains in archive_dir_path (if local is true) or backend
Use filelist if given, otherwise regenerate. Return value is pair (list of chains, list of signature paths not in any chains).
- get_signature_chains_older_than(t)[source]
Returns a list of signature chains older than the given time t
All of the times will be associated with an intact chain. Furthermore, none of the times will be of a chain which a newer set may depend on. For instance, if set A is a full set older than t, and set B is an incremental based on A which is newer than t, then the time of set A will not be returned.
- set_matched_chain_pair(sig_chains, backup_chains)[source]
Set self.matched_chain_pair and self.other_sig/backup_chains
The latest matched_chain_pair will be set. If there are both remote and local signature chains capable of matching the latest backup chain, use the local sig chain (it does not need to be downloaded).
- class duplicity.dup_collections.SignatureChain(local, location)[source]
Bases:
object
A number of linked SignatureSets
Analog to BackupChain - start with a full-sig, and continue with new-sigs.
- __init__(local, location)[source]
Return new SignatureChain.
local should be true iff the signature chain resides in config.archive_dir_path and false if the chain is in config.backend.
@param local: True if sig chain in config.archive_dir_path @type local: Boolean
@param location: Where the sig chain is located @type location: config.archive_dir_path or config.backend
duplicity.dup_main module
- class duplicity.dup_main.Restart(last_backup)[source]
Bases:
object
Class to aid in restart of inc or full backup. Instance in config.restart if restart in progress.
- duplicity.dup_main.check_last_manifest(col_stats)[source]
Check consistency and hostname/directory of last manifest
@type col_stats: CollectionStatus object @param col_stats: collection status
@rtype: void @return: void
- duplicity.dup_main.check_resources(action)[source]
Check for sufficient resources:
- temp space for volume build
- enough max open files
Put out fatal error if not sufficient to run.
@type action: string @param action: action in progress
@rtype: void @return: void
- duplicity.dup_main.check_sig_chain(col_stats)[source]
Get last signature chain for inc backup, or None if none available
@type col_stats: CollectionStatus object @param col_stats: collection status
- duplicity.dup_main.cleanup(col_stats)[source]
Delete the extraneous files in the current backend
@type col_stats: CollectionStatus object @param col_stats: collection status
@rtype: void @return: void
- duplicity.dup_main.dummy_backup(tarblock_iter)[source]
Fake writing to backend, but do go through all the source paths.
@type tarblock_iter: tarblock_iter @param tarblock_iter: iterator for current tar block
@rtype: int @return: constant 0 (zero)
- duplicity.dup_main.full_backup(col_stats)[source]
Do full backup of directory to backend, using archive_dir_path
@type col_stats: CollectionStatus object @param col_stats: collection status
@rtype: void @return: void
- duplicity.dup_main.get_man_fileobj(backup_type)[source]
Return a fileobj opened for writing, save results as manifest
Save manifest in config.archive_dir_path gzipped. Save them on the backend encrypted as needed.
@type backup_type: string @param backup_type: either “full” or “new”
@rtype: fileobj @return: fileobj opened for writing
- duplicity.dup_main.get_passphrase(n, action, for_signing=False)[source]
Check to make sure passphrase is indeed needed, then get the passphrase from environment, from gpg-agent, or user
If n=3, a password is requested and verified. If n=2, the current password is verified. If n=1, a password is requested without verification for the time being.
@type n: int @param n: verification level for a passphrase being requested @type action: string @param action: action to perform @type for_signing: boolean @param for_signing: true if the passphrase is for a signing key, false if not @rtype: string @return: passphrase
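The three verification levels as calls (the action strings are illustrative):
from duplicity.dup_main import get_passphrase

passphrase = get_passphrase(3, "full")     # new passphrase: request and verify
passphrase = get_passphrase(2, "restore")  # verify the current passphrase
passphrase = get_passphrase(1, "inc")      # request once, no verification yet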
- duplicity.dup_main.get_sig_fileobj(sig_type)[source]
Return a fileobj opened for writing, save results as signature
Save signatures in config.archive_dir gzipped. Save them on the backend encrypted as needed.
@type sig_type: string @param sig_type: either “full-sig” or “new-sig”
@rtype: fileobj @return: fileobj opened for writing
- duplicity.dup_main.incremental_backup(sig_chain)[source]
Do incremental backup of directory to backend, using archive_dir_path
@rtype: void @return: void
- duplicity.dup_main.list_current(col_stats)[source]
List the files current in the archive (examining signature only)
@type col_stats: CollectionStatus object @param col_stats: collection status
@rtype: void @return: void
- duplicity.dup_main.log_startup_parms(verbosity=5)[source]
log Python, duplicity, and system versions
- duplicity.dup_main.print_statistics(stats, bytes_written)[source]
If config.print_statistics, print stats after adding bytes_written
@rtype: void @return: void
- duplicity.dup_main.remove_all_but_n_full(col_stats)[source]
Remove backup files older than the last n full backups.
@type col_stats: CollectionStatus object @param col_stats: collection status
@rtype: void @return: void
- duplicity.dup_main.remove_old(col_stats)[source]
Remove backup files older than config.remove_time from backend
@type col_stats: CollectionStatus object @param col_stats: collection status
@rtype: void @return: void
- duplicity.dup_main.restart_position_iterator(tarblock_iter)[source]
Fake writing to backend, but do go through all the source paths. Stop when we have processed the last file and block from the last backup. Normal backup will proceed at the start of the next volume in the set.
@type tarblock_iter: tarblock_iter @param tarblock_iter: iterator for current tar block
@rtype: int @return: constant 0 (zero)
- duplicity.dup_main.restore(col_stats)[source]
Restore archive in config.backend to config.local_path
@type col_stats: CollectionStatus object @param col_stats: collection status
@rtype: void @return: void
- duplicity.dup_main.restore_add_sig_check(fileobj)[source]
Require signature when closing fileobj matches sig in gpg_profile
@rtype: void @return: void
- duplicity.dup_main.restore_check_hash(volume_info, vol_path)[source]
Check the hash of vol_path path against data in volume_info
@rtype: boolean @return: true (verified) / false (failed)
- duplicity.dup_main.restore_get_enc_fileobj(backend, filename, volume_info)[source]
Return plaintext fileobj from encrypted filename on backend
If volume_info is set, the hash of the file will be checked, assuming some hash is available. Also, if config.sign_key is set, a fatal error will be raised if file not signed by sign_key.
With --ignore-errors set, continue on hash mismatch.
- duplicity.dup_main.restore_get_patched_rop_iter(col_stats)[source]
Return iterator of patched ROPaths of desired restore data
@type col_stats: CollectionStatus object @param col_stats: collection status
- duplicity.dup_main.sync_archive(col_stats)[source]
Synchronize local archive manifest file and sig chains to remote archives. Copy missing files from remote to local as needed to make sure the local archive is synchronized to remote storage.
@rtype: void @return: void
- duplicity.dup_main.verify(col_stats)[source]
Verify files, logging differences
@type col_stats: CollectionStatus object @param col_stats: collection status
@rtype: void @return: void
- duplicity.dup_main.write_multivol(backup_type, tarblock_iter, man_outfp, sig_outfp, backend)[source]
Encrypt volumes of tarblock_iter and write to backend
backup_type should be “inc” or “full” and only matters here when picking the filenames. The path_prefix will determine the names of the files written to backend. Also writes manifest file. Returns number of bytes written.
@type backup_type: string @param backup_type: type of backup to perform, either ‘inc’ or ‘full’ @type tarblock_iter: tarblock_iter @param tarblock_iter: iterator for current tar block @type backend: callable backend object @param backend: I/O backend for selected protocol
@rtype: int @return: bytes written
duplicity.dup_temp module
Manage temporary files
- class duplicity.dup_temp.FileobjHooked(fileobj, tdp=None, dirpath=None, partname=None, permname=None, remname=None)[source]
Bases:
object
Simulate a file, but add hook on close
- __init__(fileobj, tdp=None, dirpath=None, partname=None, permname=None, remname=None)[source]
Initializer. fileobj is the file object to simulate
- property name
Return the name of the file
- class duplicity.dup_temp.SrcIter(src)[source]
Bases:
object
Iterate over source and return Block of data.
- class duplicity.dup_temp.TempDupPath(base, index=(), parseresults=None)[source]
Bases:
DupPath
Like TempPath, but built around DupPath
- class duplicity.dup_temp.TempPath(base, index=())[source]
Bases:
Path
Path object used as a temporary file
- duplicity.dup_temp.get_fileobj_duppath(dirpath, partname, permname, remname, overwrite=False)[source]
Return a file object open for writing, will write to filename
Data will be processed and written to a temporary file. When the returned fileobject is closed, it is renamed to its final position. filename must be a recognizable duplicity data file.
duplicity.dup_threading module
Duplicity specific but otherwise generic threading interfaces and utilities.
(Not called “threading” because we do not want to conflict with the standard threading module.)
- class duplicity.dup_threading.Value(value=None)[source]
Bases:
object
A thread-safe container of a reference to an object (but not the object itself).
In particular this means it is safe to:
value.set(1)
But unsafe to:
value.get()['key'] = value
Where the latter must be done using something like:
def _setprop():
    value.get()['key'] = value
with_lock(value, _setprop)
Operations such as increments are best done as:
value.transform(lambda val: val + 1)
- acquire()[source]
Acquire this Value for mutually exclusive access. Only ever needed when calling code must perform operations that cannot be done with get(), set() or transform().
- transform(fn)[source]
Call fn with the current value as the parameter, and reset the value to the return value of fn.
During the execution of fn, all other access to this Value is prevented.
If fn raised an exception, the value is not reset.
Returns the value returned by fn, or raises the exception raised by fn.
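Putting the pieces together, an atomic increment via transform() looks like:
from duplicity.dup_threading import Value

counter = Value(0)
counter.transform(lambda val: val + 1)  # other access blocked while fn runs
assert counter.get() == 1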
- duplicity.dup_threading.async_split(fn)[source]
Splits the act of calling the given function into one front-end part for waiting on the result, and a back-end part for performing the work in another thread.
Returns (waiter, caller) where waiter is a function to be called in order to wait for the results of an asynchronous invocation of fn to complete, returning fn’s result or propagating its exception.
Caller is the function to call in a background thread in order to execute fn asynchronously. Caller will return (success, waiter) where success is a boolean indicating whether the function succeeded (did NOT raise an exception), and waiter is the waiter that was originally returned by the call to async_split().
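A minimal sketch of the waiter/caller contract; dispatching the caller with threading is illustrative:
import threading
from duplicity.dup_threading import async_split

def work():
    return 42

waiter, caller = async_split(work)
threading.Thread(target=caller).start()  # runs work() in the background
result = waiter()                        # blocks; yields 42 or re-raises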
- duplicity.dup_threading.interruptably_wait(cv, waitFor)[source]
cv - The threading.Condition instance to wait on.
waitFor - Callable returning a boolean to indicate whether the criteria being waited on has been satisfied.
Perform a wait on a condition such that it is keyboard interruptable when done in the main thread. Due to Python limitations as of <= 2.5, lock acquisition and condition waits are not interruptable when performed in the main thread.
Currently, this comes at the cost of additional CPU use, compared to a normal wait. Future implementations may be more efficient if the underlying python supports it.
The condition must be acquired.
This function should only be used on conditions that are never expected to be acquired for extended periods of time, or the lock-acquire of the underlying condition could cause an uninterruptable state despite the efforts of this function.
There is no equivalent for acquiring a lock, as that cannot be done efficiently.
Example:
Instead of:
cv.acquire()
while not thing_done:
    cv.wait(someTimeout)
cv.release()
do:
cv.acquire()
interruptably_wait(cv, lambda: thing_done)
cv.release()
- duplicity.dup_threading.require_threading(reason=None)[source]
Assert that threading is required for operation to continue. Raise an appropriate exception if this is not the case.
Reason specifies an optional reason why threading is required, which will be used for error reporting in case threading is not supported.
- duplicity.dup_threading.thread_module()[source]
Returns the thread module, or dummy_thread if threading is not supported.
- duplicity.dup_threading.threading_module()[source]
Returns the threading module, or dummy_thread if threading is not supported.
duplicity.dup_time module
Provide time related exceptions and functions
- duplicity.dup_time.genstrtotime(timestr, override_curtime=None)[source]
Convert a generic time string to a time in seconds
- duplicity.dup_time.gettzd(dstflag)[source]
Return w3’s timezone identification string.
Expressed as [+/-]hh:mm. For instance, PST is -08:00. Zone coincides with what localtime(), etc., use.
- duplicity.dup_time.intstringtoseconds(interval_string)[source]
Convert a string expressing an interval (e.g. “4D2s”) to seconds
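For example, per the docstring’s own “4D2s” interval:
from duplicity.dup_time import intstringtoseconds

assert intstringtoseconds("4D2s") == 4 * 86400 + 2  # 4 days plus 2 seconds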
- duplicity.dup_time.inttopretty(seconds)[source]
Convert num of seconds to readable string like “2 hours”.
- duplicity.dup_time.setcurtime(time_in_secs=None)[source]
Sets the current time in curtime and curtimestr
- duplicity.dup_time.setprevtime(time_in_secs)[source]
Sets the previous time in prevtime and prevtimestr
- duplicity.dup_time.stringtopretty(timestring)[source]
Return pretty version of time given w3 time string
- duplicity.dup_time.stringtotime(timestring)[source]
Return time in seconds from w3 or duplicity timestring
If there is an error parsing the string, or it doesn’t look like a valid datetime string, return None.
duplicity.errors module
Error/exception classes that do not fit naturally anywhere else.
- exception duplicity.errors.BackendException(msg, code=50)[source]
Bases:
DuplicityError
Raised to indicate a backend specific problem.
- exception duplicity.errors.BadVolumeException[source]
Bases:
DuplicityError
- exception duplicity.errors.ConflictingScheme[source]
Bases:
DuplicityError
Raised to indicate an attempt was made to register a backend for a scheme for which there is already a backend registered.
- exception duplicity.errors.FatalBackendException(msg, code=50)[source]
Bases:
BackendException
Raised to indicate a backend failed fatally.
- exception duplicity.errors.InvalidBackendURL[source]
Bases:
UserError
Raised to indicate a URL was not a valid backend URL.
- exception duplicity.errors.NotSupported[source]
Bases:
DuplicityError
Exception raised when an action cannot be completed because some particular feature is not supported by the environment.
- exception duplicity.errors.TemporaryLoadException(msg, code=50)[source]
Bases:
BackendException
Raised to indicate a temporary issue on the backend. Duplicity should back off for a bit and try again.
- exception duplicity.errors.UnsupportedBackendScheme(url)[source]
Bases:
InvalidBackendURL, UserError
Raised to indicate that a backend URL was parsed successfully as a URL, but was not supported.
- exception duplicity.errors.UserError[source]
Bases:
DuplicityError
Subclasses use this in their inheritance hierarchy to signal that the error is a user generated one, and that it is therefore typically unsuitable to display a full stack trace.
duplicity.file_naming module
Produce and parse the names of duplicity’s backup files
- class duplicity.file_naming.ParseResults(type, manifest=None, volume_number=None, time=None, start_time=None, end_time=None, encrypted=None, compressed=None, partial=False)[source]
Bases:
object
Hold information taken from a duplicity filename
- duplicity.file_naming.get(type, volume_number=None, manifest=False, encrypted=False, gzipped=False, partial=False)[source]
Return duplicity filename of specified type
type can be “full”, “inc”, “full-sig”, or “new-sig”. volume_number can be given with the full and inc types. If manifest is true the filename is of a full or inc manifest file.
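A hypothetical call building filenames (this assumes duplicity’s global config — filename prefixes, current time, etc. — has already been initialized):
from duplicity import file_naming

volname = file_naming.get("full", volume_number=2, encrypted=True)  # volume file
manname = file_naming.get("full", manifest=True, encrypted=True)    # manifest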
- duplicity.file_naming.get_suffix(encrypted, gzipped)[source]
Return appropriate suffix depending on status of encryption or compression or neither.
duplicity.filechunkio module
- class duplicity.filechunkio.FileChunkIO(name, mode='r', closefd=True, offset=0, bytes=None, *args, **kwargs)[source]
Bases:
FileIO
A class that allows reading only a chunk of a file.
- __init__(name, mode='r', closefd=True, offset=0, bytes=None, *args, **kwargs)[source]
Open a file chunk. The mode can only be ‘r’ for reading. Offset is the number of bytes the chunk starts after the real file’s first byte. Bytes defines the number of bytes in the chunk, which you can set to None to include the last byte of the real file.
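A hedged usage sketch reading a 1 KiB slice that starts 4 KiB into a file (the file name is hypothetical):
from duplicity.filechunkio import FileChunkIO

with FileChunkIO("archive.bin", mode="r", offset=4096, bytes=1024) as chunk:
    data = chunk.read()  # at most 1024 bytes, taken from mid-file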
duplicity.globmatch module
- exception duplicity.globmatch.FilePrefixError[source]
Bases:
GlobbingError
Signals that a specified file doesn’t start with correct prefix
- exception duplicity.globmatch.GlobbingError[source]
Bases:
Exception
Something has gone wrong when parsing a glob string
- duplicity.globmatch._glob_get_prefix_regexs(glob_str)[source]
Return list of regexps equivalent to prefixes of glob_str
- duplicity.globmatch.glob_to_regex(pat)[source]
Return a regular expression equivalent to shell glob pat.
Currently only the ?, *, [], and ** expressions are supported. Ranges like [a-z] are currently unsupported. There is no way to quote these special characters.
This function taken with minor modifications from efnmatch.py by Donovan Baarda.
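Assuming the returned value is a regex string (as the wording suggests), usage looks like:
import re
from duplicity.globmatch import glob_to_regex

pattern = glob_to_regex("*.txt")       # '*' maps to a no-slash wildcard
assert re.match(pattern, "notes.txt")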
- duplicity.globmatch.select_fn_from_glob(glob_str, include, ignore_case=False)[source]
Return a function test_fn(path) which tests whether path matches glob, as per the Unix shell rules, taking as arguments a path, a glob string and include (0 indicating that the glob string is an exclude glob and 1 indicating that it is an include glob), returning:
0 - if the file should be excluded
1 - if the file should be included
2 - if the folder should be scanned for any included/excluded files
None - if the selection function has nothing to say about the file
The basic idea is to turn glob_str into a regular expression, and just use the normal regular expression. There is a complication because the selection function should return ‘2’ (scan) for directories which may contain a file which matches the glob_str. So we break up the glob string into parts, and any file which matches an initial sequence of glob parts gets scanned.
Thanks to Donovan Baarda who provided some code which did some things similar to this.
Note: including a folder implicitly includes everything within it.
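Construction follows the signature directly; note that test_fn expects a duplicity path object, so only the setup is sketched here:
from duplicity.globmatch import select_fn_from_glob

test_fn = select_fn_from_glob("/home/*/docs/**", include=1)
# test_fn(path) -> 1 include, 0 exclude, 2 scan folder, None no opinion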
duplicity.gpg module
duplicity’s gpg interface; builds upon Frank Tobin’s GnuPGInterface, which is now patched with some code for iterative threaded execution. See duplicity’s README for details.
- class duplicity.gpg.GPGFile(encrypt, encrypt_path, profile)[source]
Bases:
object
File-like object that encrypts or decrypts another file on the fly
- __init__(encrypt, encrypt_path, profile)[source]
GPGFile initializer
If recipients is set, use public key encryption and encrypt to the given keys. Otherwise, use symmetric encryption.
encrypt_path is the Path of the gpg encrypted file. Right now only symmetric encryption/decryption is supported.
If passphrase is false, do not set passphrase - GPG program should prompt for it.
- class duplicity.gpg.GPGProfile(passphrase=None, sign_key=None, recipients=None, hidden_recipients=None)[source]
Bases:
object
Just hold some GPG settings, avoid passing tons of arguments
- __init__(passphrase=None, sign_key=None, recipients=None, hidden_recipients=None)[source]
Set all data with initializer
passphrase is the passphrase. If it is None (not “”), assume it hasn’t been set. sign_key can be blank if no signing is indicated, and recipients should be a list of keys. For all keys, the format should be a hex key like ‘AA0E73D2’.
- _version_re = re.compile(b'^gpg.*\\(GnuPG(?:/MacGPG2)?\\) (?P<maj>[0-9]+)\\.(?P<min>[0-9]+)\\.(?P<bug>[0-9]+)(-.+)?$')
- rc(flags=0)
Compile a regular expression pattern, returning a Pattern object.
- duplicity.gpg.GPGWriteFile(block_iter, filename, profile, size=209715200, max_footer_size=16384)[source]
Write GPG compressed file of given size
This function writes a gpg compressed file by reading from the input iter and writing to filename. When it has read an amount close to the size limit, it “tops off” the incoming data with incompressible data, to try to hit the limit exactly.
block_iter should have methods .next(size), which returns the next block of data, which should be at most size bytes long. Also .get_footer() returns a string to write at the end of the input file. The footer should have max length max_footer_size.
Because gpg uses compression, we don’t assume that putting bytes_in bytes into gpg will result in bytes_in bytes out. However, we do assume that bytes_out <= bytes_in approximately.
Returns true if succeeded in writing until end of block_iter.
- duplicity.gpg.GzipWriteFile(block_iter, filename, size=209715200, gzipped=True)[source]
Write gzipped compressed file of given size
This is like the earlier GPGWriteFile except it writes a gzipped file instead of a gpg’d file. This function is somewhat out of place, because it doesn’t deal with GPG at all, but it is very similar to GPGWriteFile so they might as well be defined together.
The input requirements on block_iter and the output is the same as GPGWriteFile (returns true if wrote until end of block_iter).
- duplicity.gpg.PlainWriteFile(block_iter, filename, size=209715200, gzipped=False)[source]
Write plain uncompressed file of given size
This is like the earlier GPGWriteFile except it writes a plain uncompressed file instead of a gpg’d file. This function is somewhat out of place, because it doesn’t deal with GPG at all, but it is very similar to GPGWriteFile so they might as well be defined together.
The input requirements on block_iter and the output is the same as GPGWriteFile (returns true if wrote until end of block_iter).
duplicity.gpginterface module
Interface to GNU Privacy Guard (GnuPG)
- !!! This was renamed to gpginterface.py.
Please refer to duplicity’s README for the reason. !!!
gpginterface is a Python module to interface with GnuPG which is based on GnuPGInterface by Frank J. Tobin. It concentrates on interacting with GnuPG via filehandles, providing access to control GnuPG via versatile and extensible means.
This module is based on GnuPG::Interface, a Perl module by the same author.
Normally, using this module will involve creating a GnuPG object, setting some options in its ‘options’ data member (which is of type Options), creating some pipes to talk with GnuPG, and then calling the run() method, which will connect those pipes to the GnuPG process. run() returns a Process object, which contains the filehandles to talk to GnuPG with.
Example code:
>>> import gpginterface
>>>
>>> plaintext = b"Three blind mice"
>>> passphrase = "This is the passphrase"
>>>
>>> gnupg = gpginterface.GnuPG()
>>> gnupg.options.armor = 1
>>> gnupg.options.meta_interactive = 0
>>> gnupg.options.extra_args.append('--no-secmem-warning')
>>>
>>> # Normally we might specify something in
>>> # gnupg.options.recipients, like
>>> # gnupg.options.recipients = [ '0xABCD1234', 'bob@foo.bar' ]
>>> # but since we're doing symmetric-only encryption, it's not needed.
>>> # If you are doing standard, public-key encryption, using
>>> # --encrypt, you will need to specify recipients before
>>> # calling gnupg.run()
>>>
>>> # First we'll encrypt the test_text input symmetrically
>>> p1 = gnupg.run(['--symmetric'],
... create_fhs=['stdin', 'stdout', 'passphrase'])
>>>
>>> ret = p1.handles['passphrase'].write(passphrase)
>>> p1.handles['passphrase'].close()
>>>
>>> ret = p1.handles['stdin'].write(plaintext)
>>> p1.handles['stdin'].close()
>>>
>>> ciphertext = p1.handles['stdout'].read()
>>> p1.handles['stdout'].close()
>>>
>>> # process cleanup
>>> p1.wait()
>>>
>>> # Now we'll decrypt what we just encrypted,
>>> # using the convenience method to get the
>>> # passphrase to GnuPG
>>> gnupg.passphrase = passphrase
>>>
>>> p2 = gnupg.run(['--decrypt'], create_fhs=['stdin', 'stdout'])
>>>
>>> ret = p2.handles['stdin'].write(ciphertext)
>>> p2.handles['stdin'].close()
>>>
>>> decrypted_plaintext = p2.handles['stdout'].read()
>>> p2.handles['stdout'].close()
>>>
>>> # process cleanup
>>> p2.wait()
>>>
>>> # Our decrypted plaintext:
>>> decrypted_plaintext
b'Three blind mice'
>>>
>>> # ...and see it's the same as what we originally encrypted
>>> assert decrypted_plaintext == plaintext, "GnuPG decrypted output does not match original input"
>>>
>>>
>>> ##################################################
>>> # Now let's try using run()'s attach_fhs parameter
>>>
>>> # we're assuming we're running on a unix...
>>> infp = open('/etc/manpaths', 'rb')
>>>
>>> p1 = gnupg.run(['--symmetric'], create_fhs=['stdout'],
... attach_fhs={'stdin': infp})
>>>
>>> # GnuPG will read the stdin from /etc/manpaths
>>> ciphertext = p1.handles['stdout'].read()
>>>
>>> # process cleanup
>>> p1.wait()
>>>
>>> # Now let's run the output through GnuPG
>>> # We'll write the output to a temporary file,
>>> import tempfile
>>> temp = tempfile.TemporaryFile()
>>>
>>> p2 = gnupg.run(['--decrypt'], create_fhs=['stdin'],
... attach_fhs={'stdout': temp})
>>>
>>> # give GnuPG our encrypted stuff from the first run
>>> ret = p2.handles['stdin'].write(ciphertext)
>>> p2.handles['stdin'].close()
>>>
>>> # process cleanup
>>> p2.wait()
>>>
>>> # rewind the tempfile and see what GnuPG gave us
>>> ret = temp.seek(0)
>>> decrypted_plaintext = temp.read()
>>>
>>> # compare what GnuPG decrypted with our original input
>>> ret = infp.seek(0)
>>> input_data = infp.read()
>>> assert decrypted_plaintext == input_data, "GnuPG decrypted output does not match original input"
To do things like public-key encryption, simply do something like:
gnupg.passphrase = 'My passphrase'
gnupg.options.recipients = ['bob@foobar.com']
gnupg.run(['--sign', '--encrypt'], create_fhs=..., attach_fhs=...)
Here is an example of subclassing gpginterface.GnuPG, so that it has an encrypt_string() method that returns ciphertext.
>>> import gpginterface
>>>
>>> class MyGnuPG(gpginterface.GnuPG):
...
... def __init__(self):
... super().__init__()
... self.setup_my_options()
...
... def setup_my_options(self):
... self.options.armor = 1
... self.options.meta_interactive = 0
... self.options.extra_args.append('--no-secmem-warning')
...
... def encrypt_string(self, string, recipients):
... gnupg.options.recipients = recipients # a list!
...
... proc = gnupg.run(['--encrypt'], create_fhs=['stdin', 'stdout'])
...
... proc.handles['stdin'].write(string)
... proc.handles['stdin'].close()
...
... output = proc.handles['stdout'].read()
... proc.handles['stdout'].close()
...
... proc.wait()
... return output
...
>>> gnupg = MyGnuPG()
>>> ciphertext = gnupg.encrypt_string(b"The secret", ['E477C232'])
>>>
>>> # just a small sanity test here for doctest
>>> import types
>>> assert isinstance(ciphertext, bytes), "What GnuPG gave back is not bytes!"
Here is an example of generating a key:
>>> import gpginterface
>>> gnupg = gpginterface.GnuPG()
>>> gnupg.options.meta_interactive = 0
>>>
>>> # We will be creative and use the logger filehandle to capture
>>> # what GnuPG says this time, instead of stderr; no stdout to listen to,
>>> # but we capture logger to suppress the dry-run command.
>>> # We also have to capture stdout since otherwise doctest complains;
>>> # Normally you can let stdout through when generating a key.
>>>
>>> proc = gnupg.run(['--gen-key'], create_fhs=['stdin', 'stdout',
...                                             'logger'])
>>>
>>> ret = proc.handles['stdin'].write(b'''Key-Type: DSA
... Key-Length: 1024
... # We are only testing syntax this time, so dry-run
... %dry-run
... Subkey-Type: ELG-E
... Subkey-Length: 1024
... Name-Real: Joe Tester
... Name-Comment: with stupid passphrase
... Name-Email: joe@foo.bar
... Expire-Date: 2y
... Passphrase: abc
... %pubring foo.pub
... %secring foo.sec
... ''')
>>>
>>> proc.handles['stdin'].close()
>>>
>>> report = proc.handles['logger'].read()
>>> proc.handles['logger'].close()
>>>
>>> proc.wait()
COPYRIGHT:
Copyright (C) 2001 Frank J. Tobin, ftobin@neverending.org
LICENSE:
This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA or see http://www.gnu.org/copyleft/lesser.html
- class duplicity.gpginterface.GnuPG[source]
Bases:
object
Class instances represent GnuPG.
Instance attributes of a GnuPG object are:
call – string to call GnuPG with. Defaults to “gpg”
passphrase – Since it is a common operation to pass a passphrase in to GnuPG, and working with the passphrase filehandle mechanism directly can be mundane, the passphrase attribute works in a special manner: if it is set, and no passphrase file object is sent in to run(), then the GnuPG instance will take care of sending the passphrase to the GnuPG executable, instead of having the user send it in manually.
options – Object of type gpginterface.Options. Attribute-setting in options determines the command-line options used when calling GnuPG.
- _attach_fork_exec(gnupg_commands, args, create_fhs, attach_fhs)[source]
This is like run(), but without the passphrase-helping (note that run() calls this).
- run(gnupg_commands, args=None, create_fhs=None, attach_fhs=None)[source]
Calls GnuPG with the list of string commands gnupg_commands, complete with prefixing dashes. For example, gnupg_commands could be ["--sign", "--encrypt"]. Returns a gpginterface.Process object.
args is an optional list of GnuPG command arguments (not options), such as keyID’s to export, filenames to process, etc.
create_fhs is an optional list of GnuPG filehandle names that will be set as keys of the returned Process object’s ‘handles’ attribute. The generated filehandles can be used to communicate with GnuPG via standard input, standard output, the status-fd, passphrase-fd, etc.
- Valid GnuPG filehandle names are:
stdin
stdout
stderr
status
passphrase
command
logger
The purpose of each filehandle is described in the GnuPG documentation.
attach_fhs is an optional dictionary with GnuPG filehandle names mapping to opened files. GnuPG will read or write to the file accordingly. For example, if ‘my_file’ is an opened file and attach_fhs['stdin'] is my_file, then GnuPG will read its standard input from my_file. This is useful if you want GnuPG to read/write to/from an existing file. For instance:
f = open("encrypted.gpg")
gnupg.run(["--decrypt"], attach_fhs={'stdin': f})
Using attach_fhs also helps avoid system buffering issues that can arise when using create_fhs, which can cause the process to deadlock.
If not mentioned in create_fhs or attach_fhs, GnuPG filehandles which are a std* (stdin, stdout, stderr) are defaulted to the running process’ version of handle. Otherwise, that type of handle is simply not used when calling GnuPG. For example, if you do not care about getting data from GnuPG’s status filehandle, simply do not specify it.
run() returns a Process() object which has a ‘handles’ attribute, a dictionary mapping from the handle name (such as ‘stdin’ or ‘stdout’) to the respective newly-created FileObject connected to the running GnuPG process. For instance, if the call was
process = gnupg.run(["--decrypt"], stdin=1)
after run() returns, process.handles["stdin"] is a FileObject connected to GnuPG’s standard input, and can be written to.
- class duplicity.gpginterface.Options[source]
Bases:
object
Objects of this class encompass options passed to GnuPG. This class is responsible for determining command-line arguments which are based on options. It can be said that a GnuPG object has-a Options object in its options attribute.
Attributes which correlate directly to GnuPG options:
Each option here defaults to false or None, and is described in GnuPG documentation.
Booleans (set these attributes to booleans)
armor
no_greeting
no_verbose
quiet
batch
always_trust
rfc1991
openpgp
force_v3_sigs
no_options
textmode
Strings (set these attributes to strings)
homedir
default_key
comment
compress_algo
options
Lists (set these attributes to lists)
recipients (*NOTE* plural of ‘recipient’)
encrypt_to
Meta options
Meta options are options provided by this module that do not correlate directly to any GnuPG option by name, but are rather bundle of options used to accomplish a specific goal, such as obtaining compatibility with PGP 5. The actual arguments each of these reflects may change with time. Each defaults to false unless otherwise specified.
meta_pgp_5_compatible – If true, arguments are generated to try to be compatible with PGP 5.x.
meta_pgp_2_compatible – If true, arguments are generated to try to be compatible with PGP 2.x.
meta_interactive – If false, arguments are generated to try to help the calling program use GnuPG in a non-interactive environment, such as CGI scripts. Default is true.
extra_args – Extra option arguments may be passed in via the attribute extra_args, a list.
>>> import gpginterface
>>>
>>> gnupg = gpginterface.GnuPG()
>>> gnupg.options.armor = 1
>>> gnupg.options.recipients = ['Alice', 'Bob']
>>> gnupg.options.extra_args = ['--no-secmem-warning']
>>>
>>> # no need for users to call this normally; just for show here
>>> gnupg.options.get_args()
['--armor', '--recipient', 'Alice', '--recipient', 'Bob', '--no-secmem-warning']
- class duplicity.gpginterface.Pipe(parent, child, direct)[source]
Bases:
object
simple struct holding stuff about pipes we use
- class duplicity.gpginterface.Process[source]
Bases:
object
Objects of this class encompass properties of a GnuPG process spawned by GnuPG.run().
# gnupg is a GnuPG object
process = gnupg.run(['--decrypt'], stdout=1)
out = process.handles['stdout'].read()
...
os.waitpid(process.pid, 0)
Data Attributes
handles – This is a map of filehandle-names to the file handles, if any, that were requested via run() and hence are connected to the running GnuPG process. Valid names of this map are only those handles that were requested.
pid – The PID of the spawned GnuPG process. Useful to know, since one should call os.waitpid() to clean up the process, especially if multiple calls are made to run().
- duplicity.gpginterface.threaded_waitpid(process)[source]
When started as a thread with the Process object, thread will execute an immediate waitpid() against the process pid and will collect the process termination info. This will allow us to reap child processes as soon as possible, thus freeing resources quickly.
duplicity.lazy module
Define some lazy data structures and functions acting on them
- class duplicity.lazy.ITRBranch[source]
Bases:
object
Helper class for IterTreeReducer above
There are five stub functions below: start_process, end_process, branch_process, fast_process, and can_fast_process. A class that subclasses this one will probably fill in these functions to do more.
- base_index = None
- caught_exception = None
- finished = None
- index = None
- start_successful = None
- class duplicity.lazy.Iter[source]
Bases:
object
Hold static methods for the manipulation of lazy iterators
- static equal(iter1, iter2, verbose=None, operator=<function Iter.<lambda>>)[source]
True if iterator 1 has same elements as iterator 2
Use equality operator, or == if it is unspecified.
- static multiplex(iter, num_of_forks, final_func=None, closing_func=None)[source]
Split a single iterator into a number of streams
The return val will be a list with length num_of_forks, each of which will be an iterator like iter. final_func is the function that will be called on each element in iter just as it is being removed from the buffer. closing_func is called when all the streams are finished.
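A small illustration of the fan-out (final_func and closing_func left at their defaults):
from duplicity.lazy import Iter

a, b = Iter.multiplex(iter(range(3)), 2)
assert list(a) == [0, 1, 2]
assert list(b) == [0, 1, 2]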
- class duplicity.lazy.IterMultiplex2(iter)[source]
Bases:
object
Multiplex an iterator into 2 parts
This is a special optimized case of the Iter.multiplex function, used when there is no closing_func or final_func, and we only want to split it into 2. By profiling, this is a time sensitive class.
- class duplicity.lazy.IterTreeReducer(branch_class, branch_args)[source]
Bases:
object
Tree style reducer object for iterator - stolen from rdiff-backup
The indices of a RORPIter form a tree type structure. This class can be used on each element of an iter in sequence and the result will be as if the corresponding tree was reduced. This tries to bridge the gap between the tree nature of directories, and the iterator nature of the connection between hosts and the temporal order in which the files are processed.
This will usually be used by subclassing ITRBranch below and then calling the initializer below with the new class.
- __call__(*args)[source]
Process args, where args[0] is current position in iterator
Returns true if args successfully processed, false if index is not in the current tree and thus the final result is available.
Also note below we set self.index after doing the necessary start processing, in case there is a crash in the middle.
duplicity.librsync module
Provides a high-level interface to some librsync functions
This is a python wrapper around the lower-level _librsync module, which is written in C. The goal was to use C as little as possible…
- class duplicity.librsync.DeltaFile(signature, new_file)[source]
Bases:
LikeFile
File-like object which incrementally generates a librsync delta
- class duplicity.librsync.LikeFile(infile, need_seek=None)[source]
Bases:
object
File-like object used by SigFile, DeltaFile, and PatchFile
- check_file(file, need_seek=None)[source]
Raise type error if file doesn’t have necessary attributes
- maker = None
- mode = 'rb'
- class duplicity.librsync.PatchedFile(basis_file, delta_file)[source]
Bases:
LikeFile
File-like object which applies a librsync delta incrementally
- class duplicity.librsync.SigFile(infile, blocksize=duplicity._librsync.RS_DEFAULT_BLOCK_LEN)[source]
Bases:
LikeFile
File-like object which incrementally generates a librsync signature
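The three file-like classes compose into the usual librsync round trip. A hedged sketch (file names are hypothetical, and DeltaFile is assumed to accept a file-like signature argument):
import io
from duplicity import librsync

sig = librsync.SigFile(open("old.bin", "rb")).read()        # signature of basis
delta = librsync.DeltaFile(io.BytesIO(sig),
                           open("new.bin", "rb")).read()    # delta: old -> new
patched = librsync.PatchedFile(open("old.bin", "rb"),
                               io.BytesIO(delta)).read()    # rebuild new file
assert patched == open("new.bin", "rb").read()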
- class duplicity.librsync.SigGenerator(blocksize=duplicity._librsync.RS_DEFAULT_BLOCK_LEN)[source]
Bases:
object
Calculate signature.
Input and output is same as SigFile, but the interface is like md5 module, not filelike object
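By analogy with the md5 module: feed data in with update() and collect the result when done. The accessor name below is an assumption, not confirmed by this page:
from duplicity.librsync import SigGenerator

gen = SigGenerator()
with open("data.bin", "rb") as f:
    while True:
        block = f.read(32768)
        if not block:
            break
        gen.update(block)
signature = gen.getsig()  # assumed accessor name for the finished signature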
- exception duplicity.librsync.librsyncError[source]
Bases:
Exception
Signifies error in internal librsync processing (bad signature, etc.)
underlying _librsync.librsyncError’s are regenerated using this class because the C-created exceptions are by default unPickleable. There is probably a way to fix this in _librsync, but this scheme was easier.
duplicity.log module
Log various messages depending on verbosity level
- class duplicity.log.DetailFormatter[source]
Bases:
Formatter
Formatter that creates messages in a syntax somewhat like syslog.
- __init__()[source]
Initialize the formatter with specified format strings.
Initialize the formatter either with the specified format string, or a default as described above. Allow for specialized date formatting with the optional datefmt argument. If datefmt is omitted, you get an ISO8601-like (or RFC 3339-like) format.
Use a style parameter of '%', '{' or '$' to specify that you want to use one of %-formatting, str.format() ({}) formatting, or string.Template formatting in your format string.
Changed in version 3.2: Added the style parameter.
- format(record)[source]
Format the specified record as text.
The record's attribute dictionary is used as the operand to a string formatting operation which yields the returned string. Before formatting the dictionary, a couple of preparatory steps are carried out. The message attribute of the record is computed using LogRecord.getMessage(). If the formatting string uses the time (as determined by a call to usesTime()), formatTime() is called to format the event time. If there is exception information, it is formatted using formatException() and appended to the message.
- duplicity.log.DupToLoggerLevel(verb)[source]
Convert duplicity level to the logging module’s system, where higher is more severe
- class duplicity.log.ErrFilter(name='')[source]
Bases:
Filter
Filter that only allows messages more important than warnings
- class duplicity.log.ErrorCode[source]
Bases:
object
Enumeration class to hold error code values. These values should never change, as frontends rely upon them. Don't use 0 or negative numbers. This code is returned by duplicity via both its exit code and the log to indicate which error occurred; a sketch of mapping exit statuses back to these names follows the list below.
- absolute_files_from = 72
- backend_code_error = 55
- backend_command_error = 54
- backend_error = 50
- backend_no_space = 53
- backend_not_found = 52
- backend_permission_denied = 51
- backup_dir_doesnt_exist = 13
- bad_archive_dir = 9
- bad_encrypt_key = 81
- bad_request = 48
- bad_sign_key = 80
- bad_url = 8
- boto_calling_format = 26
- boto_lib_too_old = 25
- boto_old_style = 24
- cant_open_filelist = 7
- command_line = 2
- connection_failed = 38
- deprecated_option = 10
- dpbx_nologin = 47
- empty_files_from = 73
- enryption_mismatch = 45
- exception = 30
- file_prefix_error = 14
- ftp_ncftp_missing = 27
- ftp_ncftp_too_old = 28
- ftps_lftp_missing = 43
- generic = 1
- get_freespace_failed = 34
- get_ulimit_failed = 36
- gio_not_available = 40
- globbing_error = 15
- gpg_failed = 31
- hostname_mismatch = 3
- inc_without_sigs = 17
- maxopen_too_low = 37
- mismatched_hash = 21
- mismatched_manifests = 5
- no_manifests = 4
- no_restore_files = 20
- no_sigs = 18
- not_enough_freespace = 35
- not_implemented = 33
- pythonoptimize_set = 46
- redundant_filter = 70
- redundant_inclusion = 16
- restart_file_not_found = 39
- restore_path_exists = 11
- restore_path_not_found = 19
- s3_bucket_not_style = 32
- s3_kms_no_id = 49
- source_path_mismatch = 42
- trailing_filter = 71
- unreadable_manifests = 6
- unsigned_volume = 22
- user_error = 23
- verify_dir_doesnt_exist = 12
- volume_wrong_size = 44
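As promised above: since the codes double as exit statuses, a frontend can translate a return code back into its symbolic name. A hypothetical helper; the introspection trick is illustrative, not part of duplicity::

    import subprocess
    from duplicity.log import ErrorCode

    # Build a reverse map from numeric code to attribute name.
    code_names = {
        value: name
        for name, value in vars(ErrorCode).items()
        if isinstance(value, int)
    }

    result = subprocess.run(["duplicity", "--version"])
    print(result.returncode, code_names.get(result.returncode, "success/unknown"))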
- class duplicity.log.InfoCode[source]
Bases:
object
Enumeration class to hold info code values. These values should never change, as frontends rely upon them. Don’t use 0 or negative numbers.
- asynchronous_upload_begin = 12
- asynchronous_upload_done = 14
- collection_status = 3
- diff_file_changed = 5
- diff_file_deleted = 6
- diff_file_new = 4
- file_list = 10
- generic = 1
- patch_file_patching = 8
- patch_file_writing = 7
- progress = 2
- skipping_socket = 15
- synchronous_upload_begin = 11
- synchronous_upload_done = 13
- upload_progress = 16
- duplicity.log.Log(s, verb_level, code=1, extra=None, force_print=False, transfer_progress=False)[source]
Write s to stderr if verbosity level low enough
- duplicity.log.LoggerToDupLevel(verb)[source]
Convert logging module level to duplicity’s system, where lower is more severe
- class duplicity.log.MachineFilter(name='')[source]
Bases:
Filter
Filter that only allows levels that are consumable by other processes.
- class duplicity.log.MachineFormatter[source]
Bases:
Formatter
Formatter that creates messages in a syntax easily consumable by other processes.
- __init__()[source]
Initialize the formatter with specified format strings.
Initialize the formatter either with the specified format string, or a default as described above. Allow for specialized date formatting with the optional datefmt argument. If datefmt is omitted, you get an ISO8601-like (or RFC 3339-like) format.
Use a style parameter of '%', '{' or '$' to specify that you want to use one of %-formatting, str.format() ({}) formatting, or string.Template formatting in your format string.
Changed in version 3.2: Added the style parameter.
- format(record)[source]
Format the specified record as text.
The record's attribute dictionary is used as the operand to a string formatting operation which yields the returned string. Before formatting the dictionary, a couple of preparatory steps are carried out. The message attribute of the record is computed using LogRecord.getMessage(). If the formatting string uses the time (as determined by a call to usesTime()), formatTime() is called to format the event time. If there is exception information, it is formatted using formatException() and appended to the message.
- class duplicity.log.OutFilter(name='')[source]
Bases:
Filter
Filter that only allows warning or less important messages
- class duplicity.log.PrettyProgressFormatter[source]
Bases:
Formatter
Formatter that overwrites previous progress lines on ANSI terminals
- __init__()[source]
Initialize the formatter with specified format strings.
Initialize the formatter either with the specified format string, or a default as described above. Allow for specialized date formatting with the optional datefmt argument. If datefmt is omitted, you get an ISO8601-like (or RFC 3339-like) format.
Use a style parameter of '%', '{' or '$' to specify that you want to use one of %-formatting, str.format() ({}) formatting, or string.Template formatting in your format string.
Changed in version 3.2: Added the style parameter.
- format(record)[source]
Format the specified record as text.
The record's attribute dictionary is used as the operand to a string formatting operation which yields the returned string. Before formatting the dictionary, a couple of preparatory steps are carried out. The message attribute of the record is computed using LogRecord.getMessage(). If the formatting string uses the time (as determined by a call to usesTime()), formatTime() is called to format the event time. If there is exception information, it is formatted using formatException() and appended to the message.
- last_record_was_progress = False
- duplicity.log.PrintCollectionChangesInSet(col_stats, set_index, force_print=False)[source]
Prints changes in the specified set to the log
- duplicity.log.PrintCollectionFileChangedStatus(col_stats, filepath, force_print=False)[source]
Prints the changed status of the given file in the collection to the log
- duplicity.log.PrintCollectionStatus(col_stats, force_print=False)[source]
Prints a collection status to the log
- duplicity.log.Progress(s, current, total=None)[source]
Shortcut used for progress messages (verbosity 5).
- duplicity.log.TransferProgress(progress, eta, changed_bytes, elapsed, speed, stalled)[source]
Shortcut used for upload progress messages (verbosity 5).
- class duplicity.log.WarningCode[source]
Bases:
object
Enumeration class to hold warning code values. These values should never change, as frontends rely upon them. Don’t use 0 or negative numbers.
- cannot_iterate = 8
- cannot_process = 12
- cannot_read = 10
- cannot_stat = 9
- ftp_ncftp_v320 = 7
- generic = 1
- incomplete_backup = 5
- no_sig_for_time = 11
- orphaned_backup = 6
- orphaned_sig = 2
- process_skipped = 13
- unmatched_sig = 4
- unnecessary_sig = 3
duplicity.manifest module
Create and edit manifest for session contents
- class duplicity.manifest.Manifest(fh=None)[source]
Bases:
object
List of volumes and information about each one
- __init__(fh=None)[source]
Create blank Manifest
@param fh: fileobj for manifest
@type fh: DupPath
@rtype: Manifest
@return: manifest
- add_volume_info(vi)[source]
Add volume info vi to manifest and write to manifest
@param vi: volume info to add
@type vi: VolumeInfo
@return: void
- check_dirinfo()[source]
Return None if dirinfo is the same, otherwise error message
Does not report an error if hostname or local_dirname are unavailable.
@rtype: string
@return: None or error message
- del_volume_info(vol_num)[source]
Remove volume vol_num from the manifest
@param vol_num: volume number to delete
@type vol_num: int
@return: void
- get_containing_volumes(index_prefix)[source]
Return list of volume numbers that may contain index_prefix
- set_dirinfo()[source]
Set information about directory from config, and write to manifest file.
@rtype: Manifest
@return: manifest
- exception duplicity.manifest.ManifestError[source]
Bases:
Exception
Exception raised when problem with manifest
- duplicity.manifest.Quote(s)[source]
Return quoted version of s safe to put in a manifest or volume info
- duplicity.manifest.Unquote(quoted_string)[source]
Return the original string from quoted_string produced by Quote above
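The two functions are inverses; a quick round-trip check (whether Quote takes bytes or str is an assumption here)::

    from duplicity import manifest

    name = b"file with\nnewline"   # bytes assumed; manifests store raw file names
    assert manifest.Unquote(manifest.Quote(name)) == name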
- class duplicity.manifest.VolumeInfo[source]
Bases:
object
Information about a single volume
- contains(index_prefix, recursive=1)[source]
Return true if volume might contain index
If recursive is true, then return true if any index starting with index_prefix could be contained. Otherwise, just check if index_prefix itself is between the starting and ending indices.
- get_best_hash()[source]
Return pair (hash_type, hash_data)
SHA1 is the best hash, and MD5 is the second best hash. None is returned if no hash is available.
duplicity.patchdir module
- class duplicity.patchdir.IndexedTuple(index, sequence)[source]
Bases:
object
Like a tuple, but has .index (used previously by collate_iters)
- class duplicity.patchdir.Multivol_Filelike(tf, tar_iter, tarinfo_list, index)[source]
Bases:
object
Emulate a file like object from multivols
Maintains a buffer about the size of a volume. When it has been read() to the end, more volumes are pulled in as needed.
- duplicity.patchdir.Patch(base_path, difftar_fileobj)[source]
Patch given base_path and file object containing delta
- duplicity.patchdir.Patch_from_iter(base_path, fileobj_iter, restrict_index=())[source]
Patch given base_path and iterator of delta file objects
- class duplicity.patchdir.PathPatcher(base_path)[source]
Bases:
ITRBranch
Used by DirPatch, process the given basis and diff
- class duplicity.patchdir.ROPath_IterWriter(base_path)[source]
Bases:
ITRBranch
Used in Write_ROPaths above
We need to use an ITR because we have to update the permissions/times of directories after we write the files in them.
- class duplicity.patchdir.TarFile_FromFileobjs(fileobj_iter)[source]
Bases:
object
Like a tarfile.TarFile iterator, but read from multiple fileobjs
- duplicity.patchdir.Write_ROPaths(base_path, rop_iter)[source]
Write out ropaths in rop_iter starting at base_path
Returns 1 if something was actually written, 0 otherwise.
- duplicity.patchdir.collate_iters(iter_list)[source]
Collate iterators by index
Input is a list of n iterators each of which must iterate elements with an index attribute. The elements must come out in increasing order, and the index should be a tuple itself.
The output is an iterator which yields tuples where all elements in the tuple have the same index, and the tuple has n elements in it. If any iterator lacks an element with that index, the tuple will have None in that spot.
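The contract is concrete enough to restate as a small standalone sketch. This is illustrative only, not the module's implementation; it assumes each element carries a comparable .index tuple, as described above::

    import heapq

    def collate_by_index(iter_list):
        # Yield one tuple per distinct index; positions whose iterator
        # lacks an element at that index are filled with None.
        n = len(iter_list)
        iters = [iter(it) for it in iter_list]
        heads = []
        for pos, it in enumerate(iters):
            for elem in it:                      # take first element, if any
                heapq.heappush(heads, (elem.index, pos, elem))
                break
        while heads:
            index = heads[0][0]
            row = [None] * n
            while heads and heads[0][0] == index:
                _, pos, elem = heapq.heappop(heads)
                row[pos] = elem
                for nxt in iters[pos]:           # refill from the iterator just consumed
                    heapq.heappush(heads, (nxt.index, pos, nxt))
                    break
            yield tuple(row)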
- duplicity.patchdir.difftar2path_iter(diff_tarfile)[source]
Turn file-like difftarobj into iterator of ROPaths
- duplicity.patchdir.filter_path_iter(path_iter, index)[source]
Rewrite path elements of path_iter so they start with index
Discard any that don't start with index, and remove the index prefix from the rest.
- duplicity.patchdir.get_index_from_tarinfo(tarinfo)[source]
Return (index, difftype, multivol) pair from tarinfo object
- duplicity.patchdir.integrate_patch_iters(iter_list)[source]
Combine a list of iterators of ropath patches
The iter_list should be sorted in patch order, and the elements in each iterator need to be ordered by index. The output will be an iterator of the final ROPaths in index order.
- duplicity.patchdir.normalize_ps(patch_sequence)[source]
Given a sequence of ROPath deltas, remove blank and unnecessary ones
The sequence is assumed to be in patch order (later patches apply to earlier ones). A patch is unnecessary if a later one doesn’t require it (for instance, any patches before a “delete” are unnecessary).
- duplicity.patchdir.patch_diff_tarfile(base_path, diff_tarfile, restrict_index=())[source]
Patch given Path object using delta tarfile (as in tarfile.TarFile)
If restrict_index is set, ignore any deltas in diff_tarfile that don’t start with restrict_index.
duplicity.path module
Wrapper class around a file like “/usr/bin/env”
This class makes certain file operations more convenient and associates stat information with filenames
- class duplicity.path.DupPath(base, index=(), parseresults=None)[source]
Bases:
Path
Represent duplicity data files
Based on the file name, files that are compressed or encrypted will have different open() methods.
- class duplicity.path.Path(base, index=())[source]
Bases:
ROPath
Path class - wrapper around ordinary local files
Besides caching stat() results, this class organizes various pieces of file-handling code.
- compare_recursive(other, verbose=None)[source]
Compare self to other Path, descending down directories
- get_canonical()[source]
Return string of canonical version of path
Remove ".", and trailing slashes where possible. Note that it's harder to remove "..", as "foo/bar/.." is not necessarily "foo", so we can't use path.normpath(); a short demonstration of that hazard follows the Path entries below.
- open(mode='rb')[source]
Return fileobj associated with self
Usually this is just the file data on disk, but can be replaced with arbitrary data using the setfileobj method.
- quote(s=None)[source]
Return quoted version of s (defaults to self.name)
The output is meant to be interpreted with shells, so can be used with os.system.
- regex_chars_to_quote = re.compile('[\\\\\\"\\$`]')
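As promised under get_canonical() above, a one-line demonstration of why normpath() cannot be used::

    import os.path

    # normpath folds ".." textually, which is wrong when an intermediate
    # component is a symlink pointing elsewhere.
    print(os.path.normpath("foo/bar/.."))   # "foo" -- even if foo/bar is a symlink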
- class duplicity.path.PathDeleter[source]
Bases:
ITRBranch
Delete a directory. Called by Path.deltree
- class duplicity.path.ROPath(index, stat=None)[source]
Bases:
object
Read only Path
Objects of this class don't represent real files, so they don't have a name. They are required to be indexed, though.
- compare_verbose(other, include_data=0)[source]
Compare ROPaths like __eq__, but log reason if different
This is placed in a separate function from __eq__ because __eq__ should be very time-sensitive, and logging statements would slow it down. Used when verifying.
The data comparison is only run if include_data is true.
duplicity.progress module
Functions to compute the progress of compressing & uploading files. The heuristics try to infer the ratio between the amount of data collected by the deltas and the total size of the changing files. They also infer the compression and encryption ratio of the raw deltas before sending them to the backend. With the inferred ratios, the heuristics estimate the percentage of completion and the time left to transfer all the (yet unknown) amount of data to send. This is a forecast based on gathered evidence.
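That forecast can be paraphrased as a back-of-the-envelope calculation; every name below is illustrative, not the module's own API::

    def estimate(total_changing, sent_bytes, elapsed_s, delta_ratio, compress_ratio):
        # Scale bytes already sent by the inferred ratios to forecast the
        # total transfer, the completion percentage, and the time left.
        expected_deltas = total_changing * delta_ratio       # raw delta bytes
        expected_upload = expected_deltas * compress_ratio   # after compression/encryption
        progress = min(sent_bytes / expected_upload, 1.0) if expected_upload else 0.0
        eta_s = elapsed_s * (1 - progress) / progress if progress else None
        return progress, eta_s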
- class duplicity.progress.LogProgressThread[source]
Bases:
Thread
Background thread that reports progress to the log, every --progress-rate seconds
- __init__()[source]
This constructor should always be called with keyword arguments. Arguments are:
group should be None; reserved for future extension when a ThreadGroup class is implemented.
target is the callable object to be invoked by the run() method. Defaults to None, meaning nothing is called.
name is the thread name. By default, a unique name is constructed of the form “Thread-N” where N is a small decimal number.
args is the argument tuple for the target invocation. Defaults to ().
kwargs is a dictionary of keyword arguments for the target invocation. Defaults to {}.
If a subclass overrides the constructor, it must make sure to invoke the base class constructor (Thread.__init__()) before doing anything else to the thread.
- run()[source]
Method representing the thread’s activity.
You may override this method in a subclass. The standard run() method invokes the callable object passed to the object’s constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively.
- class duplicity.progress.ProgressTracker[source]
Bases:
object
- annotate_written_bytes(bytecount)[source]
Annotate the number of bytes that have been added/changed since the last time this function was called. The bytecount parameter gives the number of bytes written since the start of the current volume.
- has_collected_evidence()[source]
Returns true if the progress computation is on and duplicity has not yet started the first dry-run pass to collect some information
- set_evidence(stats, is_full)[source]
Stores the collected statistics from a first-pass dry-run so that they can be used later to estimate progress
duplicity.robust module
duplicity.selection module
- class duplicity.selection.Select(path)[source]
Bases:
object
Iterate appropriate Paths in given directory
This class acts as an iterator on account of its next() method. Basically, it just goes through all the files in a directory in order (depth-first) and subjects each file to a bunch of tests (selection functions) in order. The first test that includes or excludes the file means that the file gets included (iterated) or excluded. The default is include, so with no tests we would just iterate all the files in the directory in order.
The one complication to this is that sometimes we don't know whether or not to include a directory until we examine its contents. For instance, suppose we want to include all the **.py files: if /home/ben/foo.py exists, we should also include /home and /home/ben, but if those directories contain no **.py files, they shouldn't be included. For this reason, a test may not include or exclude a directory, but merely "scan" it. If later a file in the directory gets included, so does the directory.
As mentioned above, each test takes the form of a selection function. The selection function takes a path, and returns:
None - the test has nothing to say about the related file
0 - the file is excluded by the test
1 - the file is included
2 - the test says the file (which must be a directory) should be scanned
Also, a selection function f has a variable f.exclude which should be true if f could potentially exclude some file. This is used to signal an error if the last function only includes, which would be redundant and presumably isn’t what the user intends.
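A hypothetical selection function following that protocol; the .name attribute and the bytes-vs-str form of path names are assumptions::

    def object_file_sf(path):
        # Exclude *.o files, scan directories, say nothing about the rest.
        if path.isdir():
            return 2        # scan: contents decide whether it is included
        if path.name.endswith(".o"):
            return 0        # exclude
        return None         # no opinion; later tests decide

    object_file_sf.exclude = True   # this test can exclude files
    object_file_sf.name = "exclude *.o (example)"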
- Iterate(path)[source]
Return iterator yielding paths in path
This function looks a bit more complicated than it needs to be because it avoids extra recursion (and no extra function calls for non-directory files) while still doing the “directory scanning” bit.
- ParseArgs(argtuples, filelists)[source]
Create selection functions based on list of tuples
The tuples are created when the initial commandline arguments are read. They have the form (option string, additional argument) except for the filelist tuples, which should be (option-string, (additional argument, filelist_fp)).
- add_selection_func(sel_func, add_to_start=None)[source]
Add another selection function at the end or beginning
- exclude_older_get_sf(date)[source]
Return selection function based on files older than modification date
- filelist_general_get_sfs(filelist_fp, inc_default, list_name, mode='globbing', ignore_case=False)[source]
Return list of selection functions by reading fileobj
filelist_fp should be an open file object
inc_default is true if this is an include list
list_name is just the name of the list, used for logging
mode indicates whether to glob, regex, or not
- filelist_sanitise_line(line, include_default)[source]
Sanitises lines of both normal and globbing filelists, returning (line, include), with line=None if the line is blank or a comment
The aim is to parse filelists in a consistent way prior to the interpretation of globbing statements. The function removes whitespace and comment lines, and processes modifiers (leading +/-) and quotes.
- general_get_sf(pattern_str, include, mode='globbing', ignore_case=False)[source]
Return selection function given by a pattern string
The selection patterns are interpreted in accordance with the mode argument: "globbing", "literal", or "regex".
The ‘ignorecase:’ prefix is a legacy feature which historically lived on the globbing code path and was only ever documented as working for globs.
- glob_get_sf(glob_str, include, ignore_case=False)[source]
Return selection function based on glob_str
- literal_get_sf(lit_str, include, ignore_case=False)[source]
Return a selection function that matches a literal string while still including the contents of any folders which are matched
- other_filesystems_get_sf(include)[source]
Return selection function matching files on other filesystems
- parse_files_from(filelist_fp, list_name)[source]
Loads an explicit list of files to back up from a filelist, building a dictionary of directories and their contents which can be used later to emulate a filesystem walk over the listed files only.
Each specified path is unwound to identify its parent folder(s), as these are implicitly to be included.
Paths read are not stripped or checked for comments; every character on each line is significant and treated as part of the path.
- present_get_sf(filename, include)[source]
Return selection function given by existence of a file in a directory
- regexp_get_sf(regexp_string, include, ignore_case=False)[source]
Return selection function given by regexp_string
- select_fn_from_literal(lit_str, include, ignore_case=False)[source]
Return a function test_fn(path) which tests whether a path matches a literal string. See also select_fn_from_glob() in globmatch.py
This function is separated from literal_get_sf() so that it can be used to test the prefix without creating a loop.
TODO: this doesn’t need to be part of the Select class type, but not sure where else to put it?
duplicity.statistics module
Generate and process backup statistics
- class duplicity.statistics.StatsDeltaProcess[source]
Bases:
StatsObj
Keep track of statistics during DirDelta process
- class duplicity.statistics.StatsObj[source]
Bases:
object
Contains various statistics, provide string conversion functions
- byte_abbrev_list = ((1099511627776, 'TB'), (1073741824, 'GB'), (1048576, 'MB'), (1024, 'KB'))
- get_byte_summary_string(byte_count)[source]
Turn byte count into human readable string like "7.23GB"; a sketch of the abbreviation logic follows the attribute list below.
- space_regex = re.compile(' ')
- stat_attrs = ('Filename', 'StartTime', 'EndTime', 'ElapsedTime', 'Errors', 'TotalDestinationSizeChange', 'SourceFiles', 'SourceFileSize', 'NewFiles', 'NewFileSize', 'DeletedFiles', 'ChangedFiles', 'ChangedFileSize', 'ChangedDeltaSize', 'DeltaEntries', 'RawDeltaSize')
- stat_file_attrs = ('SourceFiles', 'SourceFileSize', 'NewFiles', 'NewFileSize', 'DeletedFiles', 'ChangedFiles', 'ChangedFileSize', 'ChangedDeltaSize', 'DeltaEntries', 'RawDeltaSize')
- stat_file_pairs = (('SourceFiles', False), ('SourceFileSize', True), ('NewFiles', False), ('NewFileSize', True), ('DeletedFiles', False), ('ChangedFiles', False), ('ChangedFileSize', True), ('ChangedDeltaSize', True), ('DeltaEntries', False), ('RawDeltaSize', True))
- stat_misc_attrs = ('Errors', 'TotalDestinationSizeChange')
- stat_time_attrs = ('StartTime', 'EndTime', 'ElapsedTime')
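As noted under get_byte_summary_string() above, a sketch of the abbreviation logic implied by byte_abbrev_list (illustrative, not the module's own code)::

    def byte_summary(byte_count,
                     abbrevs=((1099511627776, "TB"), (1073741824, "GB"),
                              (1048576, "MB"), (1024, "KB"))):
        # Walk thresholds from largest to smallest and format at the first hit.
        for threshold, suffix in abbrevs:
            if byte_count >= threshold:
                return f"{byte_count / threshold:.2f}{suffix}"
        return f"{byte_count} bytes"

    print(byte_summary(7762533154))   # "7.23GB"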
duplicity.tarfile module
Like system tarfile but with caching.
duplicity.tempdir module
Provides temporary file handling centered around a single top-level securely created temporary directory.
The public interface of this module is thread-safe.
- class duplicity.tempdir.TemporaryDirectory(temproot=None)[source]
Bases:
object
A temporary directory.
An instance of this class is backed by a directory in the file system created securely by the use of tempfile.mkdtemp(). Said instance can be used to obtain unique filenames inside this directory for cases where mktemp()-like semantics are desired, or (recommended) an (fd, filename) pair for mkstemp()-like semantics.
See further below for the security implications of using it.
Each instance will keep a list of all files ever created by it, to facilitate deletion of such files and rmdir() of the directory itself. It does this in order to be able to clean out the directory without resorting to a recursive delete (à la rm -rf), which would be risky. Calling code can optionally (and preferably) notify an instance that a tempfile has been deleted, so that the file need not be tracked anymore.
This class serves two primary purposes:
Firstly, it provides a convenient single top-level directory in which all the clutter ends up, rather than cluttering up the root of the system temp directory itself with many files.
Secondly, it provides a way to get mktemp() style semantics for temporary file creation, with most of the risks gone. Specifically, since the directory itself is created securely, files in this directory can be (mostly) safely created non-atomically without the usual mktemp() security implications. However, in the presence of tmpwatch, tmpreaper, or similar mechanisms that will cause files in the system tempdir to expire, a security risk is still present because the removal of the TemporaryDirectory managed directory removes all protection it offers.
For this reason, use of mkstemp() is greatly preferred above use of mktemp().
In addition, since cleanup is in the form of deletion based on a list of filenames, completely independently of whether someone else already deleted the file, there exists a race here as well. The impact should, however, be limited to the removal of an attacker's file.
- __init__(temproot=None)[source]
Create a new TemporaryDirectory backed by a unique and securely created file system directory.
temproot - The temp root directory, or None to use the system default (recommended).
- cleanup()[source]
Cleanup any files created in the temporary directory (that have not been forgotten), and clean up the temporary directory itself.
Failures are logged, but this method will not raise an exception.
- forget(fname)[source]
Forget about the given filename previously obtained through mktemp() or mkstemp(). This should be called after the file has been deleted, to stop a future cleanup() from trying to delete it.
Forgetting is only needed for scaling purposes; that is, to avoid n tempfile creations implying that n filenames are kept in memory. Typically this would never matter in duplicity, but for niceness' sake callers are encouraged to use this method whenever possible.
- mkstemp()[source]
Returns a file descriptor and a filename, as per os.mkstemp(), but located in the temporary directory and subject to tracking and automatic cleanup.
- duplicity.tempdir.default()[source]
Obtain the global default instance of TemporaryDirectory, creating it first if necessary. Failures are propagated to the caller. Most callers are expected to use this function rather than instantiating TemporaryDirectory directly, unless they explicitly desire to have their "own" directory for some reason.
This function is thread-safe.
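Putting the pieces together, a usage sketch with a private instance (real callers should normally prefer default(), per the note above)::

    import os
    from duplicity import tempdir

    td = tempdir.TemporaryDirectory()
    fd, fname = td.mkstemp()          # tracked; cleanup() would remove it
    with os.fdopen(fd, "wb") as f:
        f.write(b"scratch data")
    os.unlink(fname)
    td.forget(fname)                  # we deleted it ourselves; stop tracking
    td.cleanup()                      # remove remaining files and the directory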
duplicity.util module
Miscellaneous utilities.
- duplicity.util.copyfileobj(infp, outfp, byte_count=-1)[source]
Copy byte_count bytes from infp to outfp, or all if byte_count < 0
Returns the number of bytes actually written (which may be less than byte_count if EOF is reached). Does not close either fileobj.
- duplicity.util.csv_args_to_dict(arg)[source]
Given the string arg in single-line CSV format, split it into (key, val) pairs and produce a dictionary from them.
- duplicity.util.escape(string)[source]
Convert a (bytes) filename to a format suitable for logging (quoted utf8)
- duplicity.util.exception_traceback(limit=50)[source]
- @return A string representation, in typical Python format, of the currently active/raised exception.
- duplicity.util.ignore_missing(fn, filename)[source]
Execute fn on filename. Ignore ENOENT errors, otherwise raise exception.
@param fn: callable
@param filename: string
- duplicity.util.maybe_ignore_errors(fn)[source]
Execute fn. If the global configuration setting ignore_errors is True, catch errors, log them, and continue (returning None).
@param fn: A callable.
@return: Whatever fn returns when called, or None if it failed and ignore_errors is true.
- duplicity.util.merge_dicts(*dict_args)[source]
Given any number of dictionaries, shallow copy and merge into a new dict, precedence goes to key-value pairs in latter dictionaries.
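Quick checks of the two helpers above; the behaviour follows the docstrings, and anything beyond them is an assumption::

    import io
    from duplicity import util

    # merge_dicts: later dictionaries take precedence.
    merged = util.merge_dicts({"a": 1, "b": 2}, {"b": 3}, {"c": 4})
    assert merged == {"a": 1, "b": 3, "c": 4}

    # copyfileobj: copies a bounded number of bytes, returns the count written.
    src, dst = io.BytesIO(b"0123456789"), io.BytesIO()
    assert util.copyfileobj(src, dst, 4) == 4
    assert dst.getvalue() == b"0123"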