Welcome to duplicity’s documentation!

duplicity

duplicity package

Subpackages

duplicity.backends package
Submodules
duplicity.backends._boto_multi module
duplicity.backends._boto_single module
class duplicity.backends._boto_single.BotoBackend(parsed_url)[source]

Bases: Backend

Backend for Amazon’s Simple Storage Service (aka Amazon S3), through the use of the boto module (http://code.google.com/p/boto/).

To make use of this backend you must set aws_access_key_id and aws_secret_access_key in your ~/.boto or /etc/boto.cfg with your Amazon Web Services key id and secret respectively. Alternatively you can export the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
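
For example, a minimal sketch of supplying these credentials from a Python wrapper script; the key values below are placeholders, not real credentials:

    import os

    # Either of the two mechanisms described above works; the environment
    # variables take effect without touching ~/.boto or /etc/boto.cfg.
    os.environ["AWS_ACCESS_KEY_ID"] = "AKIAEXAMPLEKEYID"            # placeholder
    os.environ["AWS_SECRET_ACCESS_KEY"] = "examplesecretaccesskey"  # placeholder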

__init__(parsed_url)[source]
_close()[source]
_delete(filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_query(filename)[source]
_retry_cleanup()[source]
list_filenames_in_bucket()[source]
pre_process_download(remote_filename, wait=False)[source]
pre_process_download_batch(remote_filenames)[source]
resetConnection()[source]
upload(filename, key, headers)[source]
duplicity.backends._boto_single.get_connection(scheme, parsed_url, storage_uri)[source]
duplicity.backends._cf_cloudfiles module
class duplicity.backends._cf_cloudfiles.CloudFilesBackend(parsed_url)[source]

Bases: Backend

Backend for Rackspace’s CloudFiles

__init__(parsed_url)[source]
_delete(filename)[source]
_error_code(operation, e)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_query(filename)[source]
duplicity.backends._cf_pyrax module
class duplicity.backends._cf_pyrax.PyraxBackend(parsed_url)[source]

Bases: Backend

Backend for Rackspace’s CloudFiles using Pyrax

__init__(parsed_url)[source]
_delete(filename)[source]
_error_code(operation, e)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_query(filename)[source]
duplicity.backends.adbackend module
class duplicity.backends.adbackend.ADBackend(parsed_url)[source]

Bases: Backend

Backend for Amazon Drive. It communicates directly with Amazon Drive using their RESTful API and does not rely on externally setup software (like acd_cli).

CLIENT_ID = 'amzn1.application-oa2-client.791c9c2d78444e85a32eb66f92eb6bcc'
CLIENT_SECRET = '5b322c6a37b25f16d848a6a556eddcc30314fc46ae65c87068ff1bc4588d715b'
MULTIPART_BOUNDARY = 'DuplicityFormBoundaryd66364f7f8924f7e9d478e19cf4b871d114a1e00262542'
OAUTH_AUTHORIZE_URL = 'https://www.amazon.com/ap/oa'
OAUTH_REDIRECT_URL = 'https://breunig.xyz/duplicity/copy.html'
OAUTH_SCOPE = ['clouddrive:read_other', 'clouddrive:write']
OAUTH_TOKEN_PATH = '/home/docs/.duplicity_ad_oauthtoken.json'
OAUTH_TOKEN_URL = 'https://api.amazon.com/auth/o2/token'
__init__(parsed_url)[source]
_delete(remote_filename)[source]

Delete file from Amazon Drive

_get(remote_filename, local_path)[source]

Download file from Amazon Drive

_list()[source]

List files in Amazon Drive backup folder

_put(source_path, remote_filename)[source]

Upload a local file to Amazon Drive

_query(remote_filename)[source]

Retrieve file size info from Amazon Drive

get_file_id(remote_filename)[source]

Find id of remote file in backup target folder

initialize_oauth2_session()[source]

Setup or refresh oauth2 session with Amazon Drive

mkdir(parent_node_id, folder_name)[source]

Create a new folder as a child of a parent node

multipart_stream(metadata, source_path)[source]

Generator for multipart/form-data file upload from source file

raise_for_existing_file(remote_filename)[source]

Report an error when the file already exists in the target location, and delete it

read_all_pages(url)[source]

Iterates over the nodes API URL until all pages have been read

resolve_backup_target()[source]

Resolve node id for remote backup target folder

duplicity.backends.azurebackend module
class duplicity.backends.azurebackend.AzureBackend(parsed_url)[source]

Bases: Backend

Backend for Azure Blob Storage Service

__init__(parsed_url)[source]
_delete(filename)[source]
_error_code(operation, e)[source]
_get(remote_filename, local_path)[source]
_get_or_create_container()[source]
_list()[source]
_put(source_path, remote_filename)[source]
_query(filename)[source]
_set_tier(remote_filename)[source]
duplicity.backends.azurebackend._is_valid_container_name(name)[source]

Check whether the given name conforms to the rules for valid container names as defined in https://docs.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata.

duplicity.backends.b2backend module
class duplicity.backends.b2backend.B2Backend(parsed_url)[source]

Bases: Backend

Backend for BackBlaze’s B2 storage service

__init__(parsed_url)[source]

Authorize to B2 api and set up needed variables

_delete(filename)[source]

Delete filename from remote server

_get(remote_filename, local_path)[source]

Download remote_filename to local_path

_list()[source]

List files on remote server

_put(source_path, remote_filename)[source]

Copy source_path to remote_filename

_query(filename)[source]

Get size info of filename

file_info(filename)[source]
class duplicity.backends.b2backend.B2ProgressListener[source]

Bases: object

bytes_completed(byte_count)[source]
close()[source]
set_total_bytes(total_byte_count)[source]
duplicity.backends.boxbackend module
class duplicity.backends.boxbackend.BoxBackend(parsed_url)[source]

Bases: Backend

__init__(parsed_url)[source]
_delete(filename)[source]

Deletes file from the specified remote path

_get(remote_filename, local_path)[source]

Downloads file from the specified remote path

_list()[source]

Lists files in the specified remote path

_put(source_path, remote_filename)[source]

Uploads file to the specified remote folder (tries to delete it first to make sure the new one can be uploaded)

_query_list(filename_list)[source]

Query metadata for a list of files

delete(remote_file)[source]

Delete file in box folder

download(remote_file, local_file)[source]

Download file in box folder

folder_contents()[source]

Lists files of a remote box path

get_box_client(parsed_url)[source]
get_file_id_from_filename(remote_filename)[source]

Get the file id by its file name

get_id_from_path(remote_path, parent_id='0')[source]

Get the folder or file id from its path

makedirs(remote_path)[source]

Create folder(s) in a path if necessary

upload(remote_file, local_file)[source]

Upload local file to the box folder

duplicity.backends.cfbackend module
duplicity.backends.dpbxbackend module
class duplicity.backends.dpbxbackend.DPBXBackend(parsed_url)[source]

Bases: Backend

Connect to remote store using Dr*pB*x service

__init__(parsed_url)[source]
_close(*args)[source]

close backend session? no! just “flush” the data

_delete(*args)[source]
_error_code(operation, e)[source]
_get(*args)[source]
_list(*args)[source]
_put(*args)[source]
_query(*args)[source]
check_renamed_files(file_list)[source]
load_access_token()[source]
login()[source]
obtain_access_token()[source]
put_file_chunked(source_path, remote_path)[source]
put_file_small(source_path, remote_path)[source]
save_access_token(access_token)[source]
user_authenticated()[source]
duplicity.backends.dpbxbackend.command(login_required=True)[source]

a decorator for handling authentication and exceptions

duplicity.backends.dpbxbackend.log_exception(e)[source]
duplicity.backends.gdocsbackend module
class duplicity.backends.gdocsbackend.GDocsBackend(parsed_url)[source]

Bases: Backend

Connect to remote store using the Google Documents List API

BACKUP_DOCUMENT_TYPE = 'application/binary'
ROOT_FOLDER_ID = 'folder%3Aroot'
__init__(parsed_url)[source]
_authorize(email, password, captcha_token=None, captcha_response=None)[source]
_delete(filename)[source]
_fetch_entries(folder_id, type, title=None)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
duplicity.backends.gdrivebackend module
class duplicity.backends.gdrivebackend.GDriveBackend(parsed_url)[source]

Bases: Backend

Connect to remote store using Google Drive API V3

MIN_RESUMABLE_UPLOAD = 5242880
PAGE_SIZE = 100
__init__(parsed_url)[source]
_delete(filename)[source]
_error_code(operation, error)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_query(filename)[source]
file_by_name(filename)[source]
id_by_name(filename)[source]
duplicity.backends.giobackend module
class duplicity.backends.giobackend.GIOBackend(parsed_url)[source]

Bases: Backend

Use this backend when saving to a GIO URL. This is a bit of a meta-backend, in that it can handle multiple schemes. URLs look like scheme://user@server/path.

__copy_file(source, target)
__copy_progress(*args, **kwargs)
__done_with_mount(fileobj, result, loop)
__init__(parsed_url)[source]
_delete(filename)[source]
_error_code(operation, e)[source]
_get(filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_query(filename)[source]
duplicity.backends.giobackend.ensure_dbus()[source]
duplicity.backends.hsibackend module
class duplicity.backends.hsibackend.HSIBackend(parsed_url)[source]

Bases: Backend

__init__(parsed_url)[source]
_delete(filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
duplicity.backends.hubicbackend module
class duplicity.backends.hubicbackend.HubicBackend(parsed_url)[source]

Bases: PyraxBackend

Backend for Hubic using Pyrax

__init__(parsed_url)[source]
duplicity.backends.idrivedbackend module
class duplicity.backends.idrivedbackend.IDriveBackend(parsed_url)[source]

Bases: Backend

__init__(parsed_url)[source]
_close()[source]
_delete(remote_filename)[source]
_delete_list(filename_list)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_query(filename)[source]
_query_list(filename_list)[source]
connect()[source]
list_raw()[source]
request(commandline)[source]
user_connected()[source]
duplicity.backends.imapbackend module
class duplicity.backends.imapbackend.ImapBackend(parsed_url)[source]

Bases: Backend

__init__(parsed_url)[source]
_close()[source]
_delete_list(filename_list)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
delete_single_mail(i)[source]
expunge()[source]
imapf(fun, *args)[source]
prepareBody(f, rname)[source]
resetConnection()[source]
duplicity.backends.jottacloudbackend module
class duplicity.backends.jottacloudbackend.JottaCloudBackend(parsed_url)[source]

Bases: Backend

Connect to remote store using JottaCloud API

__init__(parsed_url)[source]
_close()[source]
_delete(filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_query(filename)[source]

Get size of filename

get_or_create_directory(directory_name)[source]
duplicity.backends.jottacloudbackend.get_duplicity_log_level()[source]

Get the current duplicity log level as a stdlib-compatible logging level

duplicity.backends.jottacloudbackend.get_jotta_device(jfs)[source]
duplicity.backends.jottacloudbackend.get_root_dir(jfs)[source]
duplicity.backends.jottacloudbackend.set_jottalib_log_handlers(handlers)[source]
duplicity.backends.jottacloudbackend.set_jottalib_logging_level(log_level)[source]
duplicity.backends.lftpbackend module
class duplicity.backends.lftpbackend.LFTPBackend(parsed_url)[source]

Bases: Backend

Connect to remote store using File Transfer Protocol

__init__(parsed_url)[source]
_delete(filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
duplicity.backends.localbackend module
class duplicity.backends.localbackend.LocalBackend(parsed_url)[source]

Bases: Backend

Use this backend when saving to local disk

URLs look like file://testfiles/output. Paths relative to root can be given with an extra slash (file:///usr/local).

__init__(parsed_url)[source]
_delete(filename)[source]
_delete_list(filenames)[source]
_get(filename, local_path)[source]
_list()[source]
_move(source_path, remote_filename)[source]
_put(source_path, remote_filename)[source]
_query(filename)[source]
duplicity.backends.mediafirebackend module

MediaFire Duplicity Backend

class duplicity.backends.mediafirebackend.MediafireBackend(parsed_url)[source]

Bases: Backend

Use this backend when saving to MediaFire

URLs look like mf:/root/folder.

__init__(parsed_url)[source]
_build_uri(filename='')[source]

Build relative URI

_delete(filename)[source]

Delete single file

_delete_list(filename_list)[source]

Delete list of files

_get(filename, local_path)[source]

Download file

_list()[source]

List files in backup directory

_put(source_path, remote_filename=None)[source]

Upload file

_query(filename)[source]

Stat the remote file

duplicity.backends.megabackend module
class duplicity.backends.megabackend.MegaBackend(parsed_url)[source]

Bases: Backend

Connect to remote store using Mega.co.nz API

__init__(parsed_url)[source]
_check_binary_exists(cmd)[source]

checks that a specified command exists in the current path

_delete(filename)[source]

deletes remote

_get(remote_filename, local_path)[source]

downloads file from Mega

_list()[source]

list files in the backup folder

_makedir(path)[source]

creates a remote directory

_makedir_recursive(path)[source]

creates a remote directory (recursively, the whole path), ignoring errors

_put(source_path, remote_filename)[source]

uploads file to Mega (deletes it first, to ensure it does not exist)

delete(remote_file)[source]
download(remote_file, local_file)[source]
folder_contents(files_only=False)[source]

lists contents of a folder, optionally ignoring subdirectories

upload(local_file, remote_file)[source]
duplicity.backends.megav2backend module
class duplicity.backends.megav2backend.Megav2Backend(parsed_url)[source]

Bases: Backend

Backend for MEGA.nz cloud storage, the only one that works for accounts created since Nov. 2018. See https://github.com/megous/megatools/issues/411 for more details.

This MEGA backend uses the official tools (MEGAcmd) available at https://mega.nz/cmd. MEGAcmd works through a single binary called “mega-cmd”, which talks to a backend server, “mega-cmd-server”, that keeps state (for example, persisting a session). Multiple “mega-*” shell wrappers (e.g. “mega-ls”) exist as the user interface to “mega-cmd” and the MEGA API. The full MEGAcmd User Guide can be found on the software’s GitHub page: https://github.com/meganz/MEGAcmd/blob/master/UserGuide.md

__init__(parsed_url)[source]
_check_binary_exists(cmd)[source]

Checks that a specified command exists in the running user command path

_close()[source]

Function called when backend is done being used

_delete(filename)[source]

Deletes file from the specified remote path

_get(remote_filename, local_path)[source]

Downloads file from the specified remote path

_list()[source]

Lists files in the specified remote path

_makedir(path)[source]

Creates a remote directory (recursively if necessary)

_put(source_path, remote_filename)[source]

Uploads file to the specified remote folder (tries to delete it first to make sure the new one can be uploaded)

delete(remote_file)[source]

Deletes a file from a remote MEGA path

download(remote_file, local_file)[source]

Downloads a file from a remote MEGA path

folder_contents(files_only=False)[source]

Lists contents of a remote MEGA path, optionally ignoring subdirectories

mega_login()[source]

Helper function, called from each method interacting with MEGA, to make sure a session already exists or is created to begin with

upload(local_file, remote_file)[source]

Uploads a file to a remote MEGA path

duplicity.backends.megav3backend module
class duplicity.backends.megav3backend.Megav3Backend(parsed_url)[source]

Bases: Backend

Backend for MEGA.nz cloud storage, the only one that works for accounts created since Nov. 2018. See https://github.com/megous/megatools/issues/411 for more details.

This MEGA backend uses the official tools (MEGAcmd) available at https://mega.nz/cmd. MEGAcmd works through a single binary called “mega-cmd”, which keeps state (for example, persisting a session). Multiple “mega-*” shell wrappers (e.g. “mega-ls”) exist as the user interface to “mega-cmd” and the MEGA API. The full MEGAcmd User Guide can be found on the software’s GitHub page: https://github.com/meganz/MEGAcmd/blob/master/UserGuide.md

__init__(parsed_url)[source]
_check_binary_exists(cmd)[source]

Checks that a specified command exists in the running user command path

_close()[source]

Function called when backend is done being used

_delete(filename)[source]

Deletes file from the specified remote path

_get(remote_filename, local_path)[source]

Downloads file from the specified remote path

_list()[source]

Lists files in the specified remote path

_makedir(path)[source]

Creates a remote directory (recursively if necessary)

_put(source_path, remote_filename)[source]

Uploads file to the specified remote folder (tries to delete it first to make sure the new one can be uploaded)

delete(remote_file)[source]

Deletes a file from a remote MEGA path

download(remote_file, local_file)[source]

Downloads a file from a remote MEGA path

ensure_mega_cmd_running()[source]

Trigger any mega command to ensure mega-cmd server is running

folder_contents(files_only=False)[source]

Lists contents of a remote MEGA path, optionally ignoring subdirectories

mega_login()[source]

Helper function to check that a session already exists

upload(local_file, remote_file)[source]

Uploads a file to a remote MEGA path

duplicity.backends.multibackend module
class duplicity.backends.multibackend.MultiBackend(parsed_url)[source]

Bases: Backend

Store files across multiple remote stores. The URL is a path to a local file containing URLs and other config defining the remote stores.
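
A hedged sketch of setting up such a config file from Python; the JSON schema shown (a list of objects with a “url” key) follows the duplicity man page, and the paths are placeholders:

    import json

    stores = [
        {"url": "file:///mnt/backup-a"},   # first remote store
        {"url": "file:///mnt/backup-b"},   # second remote store
    ]
    with open("/tmp/multi.json", "w") as f:
        json.dump(stores, f)

    # 'mode' and 'onfail' map to the allowed sets listed below
    # ('mirror'/'stripe' and 'abort'/'continue').
    url = "multi:///tmp/multi.json?mode=mirror&onfail=abort"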

__affinities = {}
__init__(parsed_url)[source]
__knownQueryParameters = frozenset({'mode', 'onfail', 'subpath'})
__mode = 'stripe'
__mode_allowedSet = frozenset({'mirror', 'stripe'})
__onfail_mode = 'continue'
__onfail_mode_allowedSet = frozenset({'abort', 'continue'})
__stores = []
__subpath = ''
__write_cursor = 0
_delete(filename)[source]
_delete_list(filenames)[source]
_eligible_stores(filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
static get_query_params(parsed_url)[source]
pre_process_download(filename)[source]
pre_process_download_batch(filenames)[source]
duplicity.backends.ncftpbackend module
class duplicity.backends.ncftpbackend.NCFTPBackend(parsed_url)[source]

Bases: Backend

Connect to remote store using File Transfer Protocol

__init__(parsed_url)[source]
_delete(filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
duplicity.backends.onedrivebackend module
class duplicity.backends.onedrivebackend.DefaultOAuth2Session(api_uri)[source]

Bases: OneDriveOAuth2Session

A possibly-interactive console session using a built-in API key

CLIENT_ID = '000000004C12E85D'
OAUTH_AUTHORIZE_URI = 'https://login.live.com/oauth20_authorize.srf'
OAUTH_REDIRECT_URI = 'https://login.live.com/oauth20_desktop.srf'
OAUTH_SCOPE = ['Files.Read', 'Files.ReadWrite', 'User.Read', 'offline_access']
OAUTH_TOKEN_PATH = '/home/docs/.duplicity_onedrive_oauthtoken.json'
__init__(api_uri)[source]
token_updater(token)[source]
class duplicity.backends.onedrivebackend.ExternalOAuth2Session(client_id, refresh_token)[source]

Bases: OneDriveOAuth2Session

Caller is managing tokens and provides an active refresh token.

__init__(client_id, refresh_token)[source]
class duplicity.backends.onedrivebackend.OneDriveBackend(parsed_url)[source]

Bases: Backend

Uses Microsoft OneDrive (formerly SkyDrive) for backups.

API_URI = 'https://graph.microsoft.com/v1.0/'
REQUIRED_FRAGMENT_SIZE_MULTIPLE = 327680
__init__(parsed_url)[source]
_delete(remote_filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_query(remote_filename)[source]
_retry_cleanup()[source]
initialize_oauth2_session()[source]
class duplicity.backends.onedrivebackend.OneDriveOAuth2Session[source]

Bases: object

A tiny wrapper for OAuth2Session that handles some OneDrive details.

OAUTH_TOKEN_URI = 'https://login.live.com/oauth20_token.srf'
__init__()[source]
delete(*args, **kwargs)[source]
get(*args, **kwargs)[source]
post(*args, **kwargs)[source]
put(*args, **kwargs)[source]
duplicity.backends.par2backend module
class duplicity.backends.par2backend.Par2Backend(parsed_url)[source]

Bases: Backend

This backend wraps other backends and creates Par2 recovery files before the file and the Par2 files are transferred with the wrapped backend.

If a received file is corrupt, it will try to repair it on the fly.
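
A small sketch of how the wrapper is selected, assuming the ‘par2+’ scheme prefix documented under duplicity.backend.strip_prefix below; the target URL is a placeholder:

    from duplicity import backend

    backend.import_backends()
    # 'par2+' in front of any backend URL routes transfers through
    # Par2Backend, which adds .par2 recovery files alongside each volume.
    b = backend.get_backend("par2+file:///mnt/backup")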

__init__(parsed_url)[source]
close()[source]
delete(filename)[source]

delete given filename and its .par2 files

delete_list(filename_list)[source]

delete given filename_list and all .par2 files that belong to them

error_code(operation, e)[source]
get(remote_filename, local_path)[source]

Transfer remote_filename and the related .par2 file into a temp-dir. remote_filename will be renamed into local_path before finishing.

If “par2 verify” detects an error, the Par2 volumes are transferred into the temp-dir and a repair is attempted.

list()[source]

Return list of filenames (byte strings) present in backend

Files ending with “.par2” will be excluded from the list.

move(local, remote)[source]
put(local, remote)[source]
query(filename)[source]
query_list(filename_list)[source]
retry_cleanup()[source]
transfer(method, source_path, remote_filename)[source]

Create Par2 files and transfer the given file together with the Par2 files via the wrapped backend.

Par2 must run on the real filename, or it would restore the temp filename later on. So first of all, create a tempdir and symlink the source_path as remote_filename into it.

unfiltered_list()[source]
duplicity.backends.pcabackend module
class duplicity.backends.pcabackend.PCABackend(parsed_url)[source]

Bases: Backend

Backend for OVH PCA

__init__(parsed_url)[source]
__list_objs(ffilter=None)
_delete(filename)[source]
_error_code(operation, e)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_query(filename)[source]
pre_process_download_batch(remote_filenames)[source]

This is called by the main engine before downloading volumes from this backend. For PCA, the volumes passed as argument need to be unsealed. This method is blocking, showing a status at regular intervals.

unseal(remote_filename)[source]
unseal_status(u_remote_filenames)[source]

Shows unsealing status for input volumes

duplicity.backends.pydrivebackend module
class duplicity.backends.pydrivebackend.PyDriveBackend(parsed_url)[source]

Bases: Backend

Connect to remote store using PyDrive API

__init__(parsed_url)[source]
_delete(filename)[source]
_error_code(operation, error)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_query(filename)[source]
file_by_name(filename)[source]
id_by_name(filename)[source]
duplicity.backends.rclonebackend module
class duplicity.backends.rclonebackend.RcloneBackend(parsed_url)[source]

Bases: Backend

__init__(parsed_url)[source]
_delete(remote_filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_subprocess_safe_popen(commandline)[source]
duplicity.backends.rsyncbackend module
class duplicity.backends.rsyncbackend.RsyncBackend(parsed_url)[source]

Bases: Backend

Connect to remote store using rsync

rsync backend contributed by Sebastian Wilhelmi <seppi@seppi.de>. rsyncd auth and alternate port support Copyright 2010 by Edgar Soldin <edgar.soldin@web.de>.

__init__(parsed_url)[source]

rsyncBackend initializer

_delete_list(filename_list)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
get_rsync_path()[source]
over_rsyncd()[source]
duplicity.backends.s3_boto3_backend module
class duplicity.backends.s3_boto3_backend.S3Boto3Backend(parsed_url)[source]

Bases: Backend

Backend for Amazon’s Simple Storage Service (aka Amazon S3), through the use of the boto3 module. (See https://boto3.amazonaws.com/v1/documentation/api/latest/index.html for information on boto3.)

Pursuant to Amazon’s announced deprecation of path style S3 access, this backend only supports virtual host style bucket URIs. See the man page for full details.

To make use of this backend, you must provide AWS credentials. This may be done in several ways: through the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, by the ~/.aws/credentials file, by the ~/.aws/config file, or by using the boto2 style ~/.boto or /etc/boto.cfg files.
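
A minimal sketch of obtaining this backend programmatically; the bucket URI is a placeholder, and the exact virtual-host-style form is described in the man page:

    from duplicity import backend

    backend.import_backends()
    # Credentials are resolved as described above (environment variables,
    # ~/.aws/credentials, ~/.aws/config, or boto2-style config files).
    s3 = backend.get_backend("s3://example-bucket/duplicity")  # placeholder URI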

__init__(parsed_url)[source]
_delete(remote_filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(local_source_path, remote_filename)[source]
_query(remote_filename)[source]
reset_connection()[source]
class duplicity.backends.s3_boto3_backend.UploadProgressTracker[source]

Bases: object

__init__()[source]
progress_cb(fresh_byte_count)[source]
duplicity.backends.s3_boto_backend module
duplicity.backends.slatebackend module
class duplicity.backends.slatebackend.SlateBackend(parsed_url)[source]

Bases: Backend

Backend for Slate

__init__(parsed_url)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
duplicity.backends.ssh_paramiko_backend module
class duplicity.backends.ssh_paramiko_backend.SSHParamikoBackend(parsed_url)[source]

Bases: Backend

This backend accesses files using the sftp or scp protocols. It does not need any local client programs, but an ssh server and the sftp program must be installed on the remote side (or with scp, the programs scp, ls, mkdir, rm and a POSIX-compliant shell).

Authentication keys are requested from an ssh agent if present, then ~/.ssh/id_rsa and ~/.ssh/id_dsa are tried. If -oIdentityFile=path is present in --ssh-options, then that file is also tried. The passphrase for any of these keys is taken from the URI or FTP_PASSWORD. If none of the above are available, password authentication is attempted (using the URI or FTP_PASSWORD).

Missing directories on the remote side will be created.

If scp is active then all operations on the remote side require passing arguments through a shell, which introduces unavoidable quoting issues: directory and file names that contain single quotes will not work. This problem does not exist with sftp.
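
A hedged sketch of the password fallback described above, using the FTP_PASSWORD environment variable; host, user, and path are placeholders:

    import os
    from duplicity import backend

    os.environ["FTP_PASSWORD"] = "example-secret"   # placeholder credential
    backend.import_backends()
    # With no agent key or identity file available, password authentication
    # is attempted using FTP_PASSWORD (or a password embedded in the URL).
    ssh = backend.get_backend("sftp://user@backup.example.com/duplicity")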

__init__(parsed_url)[source]
_delete(filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
gethostconfig(file, host)[source]
runremote(cmd, ignoreexitcode=False, errorprefix='')[source]

Small convenience function that opens a shell channel, runs a remote command, and returns the command’s stdout. Throws an exception if the exit code != 0 and ignoreexitcode is not set.

duplicity.backends.ssh_pexpect_backend module
class duplicity.backends.ssh_pexpect_backend.SSHPExpectBackend(parsed_url)[source]

Bases: Backend

This backend copies files using scp. Listing is not supported. Filenames must not need any quoting, or this will break.

__init__(parsed_url)[source]

scpBackend initializer

_delete(filename)[source]
_delete_list(filename_list)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
get_scp(remote_filename, local_path)[source]
get_sftp(remote_filename, local_path)[source]
put_scp(source_path, remote_filename)[source]
put_sftp(source_path, remote_filename)[source]
run_scp_command(commandline)[source]

Run an scp command, responding to password prompts

run_sftp_command(commandline, commands)[source]

Run an sftp command, responding to password prompts, passing commands from list

duplicity.backends.swiftbackend module
class duplicity.backends.swiftbackend.SwiftBackend(parsed_url)[source]

Bases: Backend

Backend for Swift

__init__(parsed_url)[source]
_delete(filename)[source]
_error_code(operation, e)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_query(filename)[source]
duplicity.backends.sxbackend module
class duplicity.backends.sxbackend.SXBackend(parsed_url)[source]

Bases: Backend

Connect to remote store using Skylable Protocol

__init__(parsed_url)[source]
_delete(filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
duplicity.backends.tahoebackend module
class duplicity.backends.tahoebackend.TAHOEBackend(parsed_url)[source]

Bases: Backend

Backend for the Tahoe file system

__init__(parsed_url)[source]
_delete(filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
get_remote_path(filename=None)[source]
run(*args)[source]
duplicity.backends.webdavbackend module
class duplicity.backends.webdavbackend.CustomMethodRequest(method, *args, **kwargs)[source]

Bases: Request

This request subclass allows explicit specification of the HTTP request method. The basic urllib.request.Request class chooses GET or POST depending on self.has_data().

__init__(method, *args, **kwargs)[source]
get_method()[source]

Return a string indicating the HTTP request method.
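
An illustrative re-implementation of this pattern (not duplicity’s source), showing why such a subclass is needed for WebDAV verbs like PROPFIND:

    import urllib.request

    class MethodRequest(urllib.request.Request):  # illustrative stand-in
        def __init__(self, method, *args, **kwargs):
            urllib.request.Request.__init__(self, *args, **kwargs)
            self._method = method

        def get_method(self):
            # The base Request would pick GET or POST from has_data();
            # here the caller's explicit method always wins.
            return self._method

    req = MethodRequest("PROPFIND", "https://dav.example.com/backups/")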

class duplicity.backends.webdavbackend.VerifiedHTTPSConnection(*args, **kwargs)[source]

Bases: HTTPSConnection

__init__(*args, **kwargs)[source]
connect()[source]

Connect to a host on a given (SSL) port.

request(*args, **kwargs)[source]

Send a complete request to the server.

class duplicity.backends.webdavbackend.WebDAVBackend(parsed_url)[source]

Bases: Backend

Backend for accessing a WebDAV repository.

webdav backend contributed in 2006 by Jesper Zedlitz <jesper@zedlitz.de>

__init__(parsed_url)[source]
_close()[source]
_delete(filename)[source]
_get(remote_filename, local_path)[source]
_list()[source]
_put(source_path, remote_filename)[source]
_retry_cleanup()[source]
connect(forced=False)[source]

Connect or re-connect to the server, updating self.conn. Reconnects on errors as a precaution; there are errors, e.g. “[Errno 32] Broken pipe” or SSL errors, that render the connection unusable.

getText(nodelist)[source]
get_authorization(response, path)[source]

Fetches the auth header based on the requested method (basic or digest)

get_basic_authorization()[source]

Returns the basic auth header

get_digest_authorization(path)[source]

Returns the digest auth header

get_kerberos_authorization()[source]
listbody = '<?xml version="1.0"?><D:propfind xmlns:D="DAV:"><D:prop><D:resourcetype/></D:prop></D:propfind>'

Connect to remote store using WebDAV Protocol

makedir()[source]

Make (nested) directories on the server.

parse_digest_challenge(challenge_string)[source]
request(method, path, data=None, redirected=0)[source]

Wraps the connection.request method to retry once if authentication is required

sanitize_path(path)[source]
taste_href(href)[source]

Internal helper to taste the given href node and, if it is a duplicity file, collect it as a result file.

@return: A matching filename, or None if the href did not match.

Module contents

Imports of backends should not be done directly in this module. All backend imports are done via import_backends() in backend.py. This file is only to instantiate the duplicity.backends module itself.

Submodules

duplicity.asyncscheduler module

Asynchronous job scheduler, for concurrent execution with minimalistic dependency guarantees.

class duplicity.asyncscheduler.AsyncScheduler(concurrency)[source]

Bases: object

Easy-to-use scheduler of function calls to be executed concurrently. A very simple dependency mechanism exists in the form of barriers (see insert_barrier()).

Each instance has a concurrency level associated with it. A concurrency of 0 implies that all tasks will be executed synchronously when scheduled. A concurrency of 1 indicates that a task will be executed asynchronously, but never concurrently with other tasks. Both 0 and 1 guarantee strict ordering among all tasks (i.e., they will be executed in the order scheduled).

At concurrency levels above 1, the tasks will end up being executed in an order undetermined except insofar as is enforced by calls to insert_barrier().

An AsyncScheduler should be created for any independent process; the scheduler will assume that if any background job fails (raises an exception), it makes further work moot.

__execute_caller(caller)
__init__(concurrency)[source]

Create an asynchronous scheduler that executes jobs with the given level of concurrency.

__run_asynchronously(fn, params)
__run_synchronously(fn, params)
__start_worker(caller)

Start a new worker.

insert_barrier()[source]

Proclaim that any tasks scheduled prior to the call to this method MUST be executed prior to any tasks scheduled after the call to this method.

The intended use case is that if task B depends on A, a barrier must be inserted in between to guarantee that A happens before B.

schedule_task(fn, params)[source]

Schedule the given task (callable, typically function) for execution. Pass the given parameters to the function when calling it. Returns a callable which can optionally be used to wait for the task to complete, either by returning its return value or by propagating any exception raised by said task.

This method may block or return immediately, depending on the configuration and state of the scheduler.

This method may also raise an exception in order to trigger failures early, if the task (if run synchronously) or a previous task has already failed.

NOTE: Pay particular attention to the scope in which this is called. In particular, since it will execute concurrently in the background, assuming fn is a closure, any variables used must be properly bound in the closure. This is the reason for the convenience feature of being able to give parameters to the call, to avoid having to wrap the call itself in a function in order to “fixate” variables in, for example, an enclosing loop.

wait()[source]

Wait for the scheduler to become entirely empty (i.e., all tasks having run to completion).

IMPORTANT: This is only useful with a single caller scheduling tasks, such that no call to schedule_task() is currently in progress or may happen subsequently to the call to wait().
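
A usage sketch based on the interface above; the upload function and volume names are stand-ins for duplicity’s real transfer tasks:

    from duplicity.asyncscheduler import AsyncScheduler

    def upload(volume):                        # hypothetical task
        print(f"uploading {volume}")

    sched = AsyncScheduler(concurrency=2)      # at most two tasks in flight
    for name in ("vol1", "vol2", "vol3"):
        sched.schedule_task(upload, (name,))
    sched.insert_barrier()                     # all volumes before the manifest
    sched.schedule_task(upload, ("manifest",))
    sched.wait()                               # block until everything completes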

duplicity.backend module

Provides a common interface to all backends and certain services intended to be used by the backends themselves.

class duplicity.backend.Backend(parsed_url)[source]

Bases: object

See README in backends directory for information on how to write a backend.

__init__(parsed_url)[source]
__subprocess_popen(args)

For internal use. Execute the given command line, interpreted as a shell command. Returns int Exitcode, string StdOut, string StdErr

get_password()[source]

Return a password for authentication purposes. The password will be obtained from the backend URL, the environment, by asking the user, or by some other method. When applicable, the result will be cached for future invocations.

munge_password(commandline)[source]

Remove password from commandline by substituting the password found in the URL, if any, with a generic place-holder.

This is intended for display purposes only, and it is not guaranteed that the results are correct (i.e., more than just the ‘:password@’ may be substituted).

popen_breaks = {}
subprocess_popen(commandline)[source]

Execute the given command line with error check. Returns int Exitcode, string StdOut, string StdErr

Raise a BackendException on failure.

use_getpass = True
class duplicity.backend.BackendWrapper(backend)[source]

Bases: object

Represents a generic duplicity backend, capable of storing and retrieving files.

__do_put(source_path, remote_filename)
__init__(backend)[source]
_do_delete(*args)[source]
_do_delete_list(*args)[source]
_do_query(*args)[source]
_do_query_list(*args)[source]
close()[source]

Close the backend, releasing any resources held and invalidating any file objects obtained from the backend.

delete(filename_list)[source]

Delete each filename in filename_list, in order if possible.

get(*args)[source]
get_data(filename, parseresults=None)[source]

Retrieve a file from backend, process it, return contents.

get_fileobj_read(filename, parseresults=None)[source]

Return fileobject opened for reading of filename on backend

The file will be downloaded first into a temp file. When the returned fileobj is closed, the temp file will be deleted.

list(*args)[source]
move(*args)[source]
pre_process_download(remote_filename)[source]

Manages remote access before downloading files (unseal data in cold storage for instance)

pre_process_download_batch(remote_filenames)[source]

Manages remote access before downloading files (unseal data in cold storage for instance)

put(*args)[source]
query_info(filename_list)[source]

Return metadata about each filename in filename_list

class duplicity.backend.ParsedUrl(url_string)[source]

Bases: object

Parse the given URL as a duplicity backend URL.

Returns the data of a parsed URL with the same field names as the standard urlparse.urlparse(), except that all values have been resolved rather than deferred. There are no get_* members. This makes sure that URL parsing errors are detected early.

Raise InvalidBackendURL on invalid URLs.

__init__(url_string)[source]
geturl()[source]
strip_auth()[source]
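
A small sketch (host and credentials are placeholders) of the eager parsing and auth stripping described above:

    from duplicity.backend import ParsedUrl

    pu = ParsedUrl("ftp://user:secret@ftp.example.com/backups")
    print(pu.geturl())      # the parsed URL
    print(pu.strip_auth())  # same URL with username/password removed
    # A malformed backend URL raises InvalidBackendURL instead of
    # deferring the error to first use.
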
duplicity.backend._get_code_from_exception(backend, operation, e)[source]
duplicity.backend.get_backend(url_string)[source]

Instantiate a backend suitable for the given URL, or return None if the given string looks like a local path rather than a URL.

Raise InvalidBackendURL if the URL is not a valid URL.
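
For example (paths are placeholders; in normal operation duplicity calls import_backends() itself during start-up):

    from duplicity import backend

    backend.import_backends()   # populate the scheme registry first
    b = backend.get_backend("file:///tmp/duplicity-target")
    assert backend.is_backend_url("file:///tmp/duplicity-target")
    assert backend.get_backend("/tmp/plain/path") is None  # local path, not a URL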

duplicity.backend.get_backend_object(url_string)[source]

Find the right backend class instance for the given URL, or return None if the given string looks like a local path rather than a URL.

Raise InvalidBackendURL if the URL is not a valid URL.

duplicity.backend.import_backends()[source]

Import files in the duplicity/backends directory where the filename ends in ‘backend.py’ and ignore the rest.

@rtype: void
@return: void

duplicity.backend.is_backend_url(url_string)[source]

@return Whether the given string looks like a backend URL.

duplicity.backend.register_backend(scheme, backend_factory)[source]

Register a given backend factory responsible for URLs with the given scheme.

The backend must be a callable which, when called with a URL as the single parameter, returns an object implementing the backend protocol (i.e., a subclass of Backend).

Typically the callable will be the Backend subclass itself.

This function is not thread-safe and is intended to be called during module importation or start-up.
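
A minimal sketch of this contract; the NullBackend class is hypothetical, and its hook names follow the private methods listed throughout this page:

    import duplicity.backend

    class NullBackend(duplicity.backend.Backend):   # hypothetical backend
        def __init__(self, parsed_url):
            duplicity.backend.Backend.__init__(self, parsed_url)

        def _put(self, source_path, remote_filename):
            pass                                    # discard every upload

        def _list(self):
            return []                               # nothing is ever stored

    # The factory is the Backend subclass itself; it will be called with
    # the ParsedUrl for any 'null://...' URL.
    duplicity.backend.register_backend("null", NullBackend)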

duplicity.backend.register_backend_prefix(scheme, backend_factory)[source]

Register a given backend factory responsible for URLs with the given scheme prefix.

The backend must be a callable which, when called with a URL as the single parameter, returns an object implementing the backend protocol (i.e., a subclass of Backend).

Typically the callable will be the Backend subclass itself.

This function is not thread-safe and is intended to be called during module importation or start-up.

duplicity.backend.retry(operation, fatal=True)[source]
duplicity.backend.strip_auth_from_url(parsed_url)[source]

Return a URL from a urlparse object without a username or password.

duplicity.backend.strip_prefix(url_string, prefix_scheme)[source]

Strip the prefix from a string, e.g. par2+ftp://… -> ftp://…

duplicity.cached_ops module

Cache-wrapped functions for grp and pwd lookups.

class duplicity.cached_ops.CachedCall(f)[source]

Bases: object

Decorator for caching the results of function calls.

__call__(*args)[source]

Call self as a function.

__init__(f)[source]
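
An illustrative re-implementation of such a decorator (not duplicity’s source), memoizing by positional arguments as the __call__(*args) signature suggests:

    class Memoized:                       # hypothetical CachedCall-style decorator
        def __init__(self, f):
            self.f = f
            self.cache = {}

        def __call__(self, *args):
            if args not in self.cache:
                self.cache[args] = self.f(*args)
            return self.cache[args]

    @Memoized
    def group_name(gid):
        import grp                        # Unix-only, matching the grp/pwd use case
        return grp.getgrgid(gid).gr_name  # cached after the first lookup per gid
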
duplicity.cli_data module

Data used to parse the command line, check for consistency, and set config

class duplicity.cli_data.CommandAliases[source]

Bases: object

commands and aliases

__init__() None
backup = ['back', 'bu']
cleanup = ['clean', 'cl']
collection_status = ['stat', 'st']
full = ['fb']
incremental = ['inc', 'ib']
list_current_files = ['list', 'ls']
remove_all_but_n_full = ['rmfull', 'rf']
remove_all_inc_of_but_n_full = ['rminc', 'ri']
remove_older_than = ['rmolder', 'ro']
restore = ['rest', 'rb']
verify = ['veri', 'vb']
class duplicity.cli_data.CommandOptions[source]

Bases: object

legal options by command

__init__() None
backup = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--volsize', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--filter-regexp', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--files-from', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--exclude-regexp', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--filter-literal', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--filter-strictcase', '--exclude-if-present', '--gpg-options', '--num-retries', '--s3-kms-grant', '--exclude-filelist', '--mp-segment-size', '--timeout', '--s3-use-glacier', '--ignore-errors', '--exclude', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--include', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--include-regexp', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--filter-ignorecase', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--dry-run', '--include-filelist', '--asynchronous-upload', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--filter-globbing', '--copy-links', '--s3-kms-key-id', '--name', '--s3-use-multiprocessing']
cleanup = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
collection_status = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
full = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--volsize', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--filter-regexp', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--files-from', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--exclude-regexp', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--filter-literal', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--filter-strictcase', '--exclude-if-present', '--gpg-options', '--num-retries', '--s3-kms-grant', '--exclude-filelist', '--mp-segment-size', '--timeout', '--s3-use-glacier', '--ignore-errors', '--exclude', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--include', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--include-regexp', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--filter-ignorecase', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--dry-run', '--include-filelist', '--asynchronous-upload', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--filter-globbing', '--copy-links', '--s3-kms-key-id', '--name', '--s3-use-multiprocessing']
incremental = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--volsize', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--filter-regexp', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--files-from', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--exclude-regexp', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--filter-literal', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--filter-strictcase', '--exclude-if-present', '--gpg-options', '--num-retries', '--s3-kms-grant', '--exclude-filelist', '--mp-segment-size', '--timeout', '--s3-use-glacier', '--ignore-errors', '--exclude', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--include', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--include-regexp', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--filter-ignorecase', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--dry-run', '--include-filelist', '--asynchronous-upload', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--filter-globbing', '--copy-links', '--s3-kms-key-id', '--name', '--s3-use-multiprocessing']
list_current_files = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
remove_all_but_n_full = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
remove_all_inc_of_but_n_full = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
remove_older_than = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
restore = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
verify = ['--s3-european-buckets', '--webdav-headers', '--encrypt-key', '--s3-endpoint-url', '--use-agent', '--mf-purge', '--azure-max-block-size', '--null-separator', '--force', '--swift-storage-policy', '--file-prefix-archive', '--exclude-globbing-filelist', '--par2-volumes', '--s3-use-server-side-kms-encryption', '--max-blocksize', '--ssh-options', '--rename', '--numeric-owner', '--ssl-no-check-certificate', '--azure-max-connections', '--imap-mailbox', '--progress-rate', '--sign-key', '--s3-use-new-style', '--s3-use-rrs', '--exclude-older-than', '--s3-region-name', '--s3-use-onezone-ia', '--skip-volume', '--exclude-other-filesystems', '--par2-redundancy', '--do-not-restore-ownership', '--ssl-cacert-file', '--rsync-options', '--fail-on-volume', '--no-compression', '--compare-data', '--restore-time', '--exclude-filelist-stdin', '--par2-options', '--file-prefix-signature', '--ssh-askpass', '--s3-use-ia', '--include-globbing-filelist', '--no-encryption', '--progress', '--verbosity', '--s3-multipart-max-procs', '--encrypt-secret-keyring', '--b2-hide-files', '--idr-fakeroot', '--azure-blob-tier', '--exclude-device-files', '--no-files-changed', '--scp-command', '--s3-unencrypted-connection', '--pydevd', '--gpg-binary', '--path-to-restore', '--file-prefix-manifest', '--show-changes-in-set', '--ftp-regular', '--ftp-passive', '--config-dir', '--no-restore-ownership', '--s3-multipart-max-timeout', '--s3-use-server-side-encryption', '--gpg-options', '--num-retries', '--s3-kms-grant', '--mp-segment-size', '--timeout', '--name', '--s3-use-glacier', '--ignore-errors', '--s3-use-deep-archive', '--sftp-command', '--imap-full-address', '--current-time', '--s3-use-glacier-ir', '--full-if-older-than', '--archive-dir', '--hidden-encrypt-key', '--metadata-sync-mode', '--old-filenames', '--backend-retry-delay', '--azure-max-single-put-size', '--include-filelist-stdin', '--file-to-restore', '--ssl-cacert-path', '--short-filenames', '--gio', '--tempdir', '--s3-multipart-chunk-size', '--no-print-statistics', '--file-prefix', '--allow-source-mismatch', '--file-changed', '--cf-backend', '--copy-links', '--s3-kms-key-id', '--exclude-if-present', '--s3-use-multiprocessing']
class duplicity.cli_data.DuplicityCommands[source]

Bases: object

duplicity commands and positional args expected

NOTE: cli_util must contain a function named check_* for each positional arg, for example check_source_path() to check source path validity; a sketch of this pairing follows the attribute list below.

__init__() → None
backup = ['source_path', 'target_url']
cleanup = ['target_url']
collection_status = ['target_url']
full = ['source_path', 'target_url']
incremental = ['source_path', 'target_url']
list_current_files = ['target_url']
remove_all_but_n_full = ['count', 'target_url']
remove_all_inc_of_but_n_full = ['count', 'target_url']
remove_older_than = ['remove_time', 'target_url']
restore = ['source_url', 'target_dir']
verify = ['source_url', 'target_dir']
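For illustration, the pairing the NOTE above describes can be expressed as a short lookup (validators_for is a hypothetical helper, not part of duplicity):

    import duplicity.cli_util as cli_util
    from duplicity.cli_data import DuplicityCommands

    def validators_for(command):
        # Yield (positional_arg, check_function) pairs for one command,
        # e.g. restore -> check_source_url, check_target_dir.
        for arg in getattr(DuplicityCommands, command):
            yield arg, getattr(cli_util, f"check_{arg}")

    for arg, checker in validators_for("restore"):
        print(arg, checker.__name__)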
class duplicity.cli_data.OptionAliases[source]

Bases: object

__init__() → None
path_to_restore = ['-r']
restore_time = ['-t', '--time']
verbosity = ['-v']
version = ['-V']
class duplicity.cli_data.OptionKwargs[source]

Bases: object

Option kwargs for add_argument; a usage sketch follows the attribute list below.

__init__() → None
allow_source_mismatch = {'action': 'store_true', 'default': False, 'help': 'Allow different source directories'}
archive_dir = {'default': '/home/docs/.cache/duplicity', 'help': 'Path to store metadata archives', 'metavar': 'path', 'type': <function check_file>}
asynchronous_upload = {'action': 'store_const', 'const': 1, 'default': 0, 'dest': 'async_concurrency', 'help': 'Number of async upload tasks, max of 1'}
azure_blob_tier = {'default': None, 'help': 'Standard storage tier used for storing backup files (Hot|Cool|Archive)', 'metavar': 'Hot|Cool|Archive'}
azure_max_block_size = {'default': None, 'help': 'Number for the block size to upload a blob if the length is unknown\nor is larger than the value set by --azure-max-single-put-size\nThe maximum block size the service supports is 100MiB.', 'metavar': 'number', 'type': <class 'int'>}
azure_max_connections = {'default': None, 'help': 'Number of maximum parallel connections to use when the blob size exceeds 64MB', 'metavar': 'number', 'type': <class 'int'>}
azure_max_single_put_size = {'default': None, 'help': 'Largest supported upload size where the Azure library makes only one put call.\nUsed to upload a single block if the content length is known and is less than this', 'metavar': 'number', 'type': <class 'int'>}
b2_hide_files = {'action': 'store_true', 'default': False, 'help': 'Whether the B2 backend hides files instead of deleting them'}
backend_retry_delay = {'default': 30, 'help': 'Delay time before next try after a failure of a backend operation', 'metavar': 'seconds', 'type': <class 'int'>}
cf_backend = {'default': 'pyrax', 'help': 'Allow the user to switch cloudfiles backend', 'metavar': 'pyrax|cloudfiles'}
compare_data = {'action': 'store_true', 'default': False, 'help': 'Compare data on verify not only signatures'}
config_dir = {'default': '/home/docs/.cache/duplicity', 'help': 'Path to store configuration files', 'metavar': 'path', 'type': <function check_file>}
current_time = {'help': '==SUPPRESS==', 'type': <class 'int'>}
do_not_restore_ownership = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
dry_run = {'action': 'store_true', 'default': False, 'help': 'Perform dry-run with no writes'}
encrypt_key = {'default': None, 'help': 'GNUpg key for encryption/decryption', 'metavar': 'gpg-key-id', 'type': <function set_encrypt_key>}
encrypt_secret_keyring = {'default': None, 'help': 'Path to secret GNUpg keyring', 'metavar': 'path'}
exclude = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'Exclude globbing pattern', 'metavar': 'shell_pattern'}
exclude_device_files = {'action': 'store_true', 'default': False, 'help': 'Exclude device files'}
exclude_filelist = {'action': <class 'duplicity.cli_util.AddFilelistAction'>, 'default': None, 'help': 'File with list of file patterns to exclude', 'metavar': 'filename'}
exclude_filelist_stdin = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
exclude_globbing_filelist = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
exclude_if_present = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'Exclude directory if this file is present', 'metavar': 'filename'}
exclude_older_than = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'Exclude files older than time', 'metavar': 'time'}
exclude_other_filesystems = {'action': 'store_true', 'default': False, 'help': 'Exclude other filesystems from backup'}
exclude_regexp = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'Exclude based on regex pattern', 'metavar': 'regex'}
fail_on_volume = {'help': '==SUPPRESS==', 'type': <class 'int'>}
file_changed = {'default': None, 'help': 'Whether to collect only the file status, not the whole root', 'metavar': 'path', 'type': <function check_file>}
file_prefix = {'default': b'', 'help': 'String prefix for all duplicity files', 'metavar': 'string', 'type': <function make_bytes>}
file_prefix_archive = {'default': b'', 'help': 'String prefix for duplicity difftar files', 'metavar': 'string', 'type': <function make_bytes>}
file_prefix_manifest = {'default': b'', 'help': 'String prefix for duplicity manifest files', 'metavar': 'string', 'type': <function make_bytes>}
file_prefix_signature = {'default': b'', 'help': 'String prefix for duplicity signature files', 'metavar': 'string', 'type': <function make_bytes>}
file_to_restore = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
files_from = {'action': <class 'duplicity.cli_util.AddFilelistAction'>, 'default': None, 'help': 'Defines the backup source as a sub-set of the source folder', 'metavar': 'filename', 'type': <function check_file>}
filter_globbing = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'File selection mode switch, changes the interpretation of any subsequent\n--exclude* or --include* options to shell globbing.', 'nargs': 0}
filter_ignorecase = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'File selection mode switch, changes the interpretation of any subsequent\n--exclude* or --include* options to case-insensitive matching.', 'nargs': 0}
filter_literal = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'File selection mode switch, changes the interpretation of any subsequent\n--exclude* or --include* options to literal strings.', 'nargs': 0}
filter_regexp = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'File selection mode switch, changes the interpretation of any subsequent\n--exclude* or --include* options to regular expressions.', 'nargs': 0}
filter_strictcase = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'File selection mode switch, changes the interpretation of any subsequent\n--exclude* or --include* options to case-sensitive matching.', 'nargs': 0}
force = {'action': 'store_true', 'default': None, 'help': 'Force duplicity to actually delete during cleanup'}
ftp_passive = {'action': 'store_const', 'const': 'passive', 'default': 'passive', 'dest': 'ftp_connection', 'help': 'Tell FTP to use passive mode'}
ftp_regular = {'action': 'store_const', 'const': 'regular', 'default': 'passive', 'dest': 'ftp_connection', 'help': 'Tell FTP to use regular mode'}
full_if_older_than = {'default': None, 'help': "Perform full backup if last full is older than 'time'", 'metavar': 'time', 'type': <function check_time>}
gio = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
gpg_binary = {'default': None, 'help': 'Path to GNUpg executable file', 'metavar': 'path', 'type': <function check_file>}
gpg_options = {'action': 'append', 'default': None, 'help': 'Options to append to GNUpg invocation', 'metavar': 'options'}
hidden_encrypt_key = {'default': None, 'help': 'Hidden GNUpg encryption key', 'metavar': 'gpg-key-id', 'type': <function set_hidden_encrypt_key>}
idr_fakeroot = {'default': None, 'help': 'Fake root for idrive backend', 'metavar': 'path', 'type': <function check_file>}
ignore_errors = {'action': 'store_true', 'default': False, 'help': 'Ignore most errors during processing'}
imap_full_address = {'action': 'store_true', 'default': False, 'help': 'Whether to use the full email address as the user name'}
imap_mailbox = {'default': 'INBOX', 'help': 'Name of the imap folder to store backups', 'metavar': 'imap_mailbox'}
include = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'Include globbing pattern', 'metavar': 'shell_pattern'}
include_filelist = {'action': <class 'duplicity.cli_util.AddFilelistAction'>, 'default': None, 'help': 'File with list of file patterns to include', 'metavar': 'filename'}
include_filelist_stdin = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
include_globbing_filelist = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
include_regexp = {'action': <class 'duplicity.cli_util.AddSelectionAction'>, 'default': None, 'help': 'Include based on regex pattern', 'metavar': 'regex'}
log_fd = {'default': None, 'help': 'Logging file descriptor to use', 'metavar': 'file_descriptor', 'type': <function set_log_fd>}
log_file = {'default': None, 'help': 'Logging filename to use', 'metavar': 'log_filename', 'type': <function set_log_file>}
log_timestamp = {'action': 'store_true', 'default': False, 'help': 'Whether to include timestamp and level in log'}
max_blocksize = {'default': 2048, 'help': 'Maximum block size for large files in MB', 'metavar': 'number', 'type': <class 'int'>}
metadata_sync_mode = {'choices': ('full', 'partial'), 'default': 'partial', 'help': 'Only sync required metadata, not all'}
mf_purge = {'action': 'store_true', 'default': False, 'help': 'Option for mediafire to purge files on delete instead of sending to trash'}
mp_segment_size = {'default': 230686720, 'help': 'Swift backend segment size', 'metavar': 'number', 'type': <function set_megs>}
name = {'default': None, 'dest': 'backup_name', 'help': 'Custom backup name instead of hash', 'metavar': 'backup name'}
no_compression = {'action': 'store_false', 'default': True, 'dest': 'compression', 'help': 'If supplied do not perform compression'}
no_encryption = {'action': 'store_false', 'default': True, 'dest': 'encryption', 'help': 'If supplied do not perform encryption'}
no_files_changed = {'action': 'store_false', 'default': True, 'dest': 'files_changed', 'help': 'If supplied do not collect the files_changed list'}
no_print_statistics = {'action': 'store_false', 'default': True, 'dest': 'print_statistics', 'help': 'If supplied do not print statistics'}
no_restore_ownership = {'action': 'store_false', 'default': True, 'dest': 'restore_ownership', 'help': 'If supplied do not restore uid/gid when finished'}
null_separator = {'action': 'store_true', 'default': None, 'help': 'Whether to split on null instead of newline'}
num_retries = {'default': 5, 'help': 'Number of retries on network operations', 'metavar': 'number', 'type': <class 'int'>}
numeric_owner = {'action': 'store_true', 'default': False, 'help': 'Keeps number from tar file. Like same option in GNU tar.'}
old_filenames = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
par2_options = {'action': 'append', 'default': '', 'help': 'Verbatim par2 options.  May be supplied multiple times.', 'metavar': 'options'}
par2_redundancy = {'default': 10, 'help': 'Level of Redundancy in percent for Par2 files', 'metavar': 'number', 'type': <class 'int'>}
par2_volumes = {'default': 1, 'help': 'Number of par2 volumes', 'metavar': 'number', 'type': <class 'int'>}
path_to_restore = {'default': None, 'dest': 'restore_path', 'help': 'File or directory path to restore', 'metavar': 'path', 'type': <function check_file>}
progress = {'action': 'store_true', 'default': False, 'help': 'Display progress for the full and incremental backup operations'}
progress_rate = {'default': 3, 'help': 'Used to control the progress option update rate in seconds', 'metavar': 'number', 'type': <class 'int'>}
pydevd = {'action': 'store_true', 'help': '==SUPPRESS=='}
rename = {'action': <class 'duplicity.cli_util.AddRenameAction'>, 'default': None, 'help': 'Rename files during restore', 'metavar': 'from to', 'nargs': 2}
restore_time = {'default': None, 'help': 'Restores will try to bring back the state as of the following time', 'metavar': 'time', 'type': <function check_time>}
rsync_options = {'action': 'append', 'default': '', 'help': 'User added rsync options', 'metavar': 'options'}
s3_endpoint_url = {'action': 'store', 'default': None, 'help': 'Specify S3 endpoint', 'metavar': 's3_endpoint_url'}
s3_european_buckets = {'action': 'store_true', 'default': False, 'help': 'Whether to create European buckets'}
s3_kms_grant = {'action': 'store', 'default': None, 'help': 'S3 KMS grant value', 'metavar': 's3_kms_grant'}
s3_kms_key_id = {'action': 'store', 'default': None, 'help': 'S3 KMS encryption key id', 'metavar': 's3_kms_key_id'}
s3_multipart_chunk_size = {'default': 20, 'help': 'Chunk size used for S3 multipart uploads. The number of parallel uploads to\nS3 is given by chunk size / volume size. Use this to maximize the use of\nyour bandwidth', 'metavar': 'number', 'type': <function set_megs>}
s3_multipart_max_procs = {'default': 4, 'help': 'Number of processes to set the Processor Pool to when uploading multipart\nuploads to S3. Use this to control the maximum simultaneous uploads to S3', 'metavar': 'number', 'type': <class 'int'>}
s3_multipart_max_timeout = {'default': None, 'help': 'Number of seconds to wait for each part of a multipart upload to S3. Use this\nto prevent hangups when doing a multipart upload to S3', 'metavar': 'number', 'type': <class 'int'>}
s3_region_name = {'action': 'store', 'default': None, 'help': 'Specify S3 region name', 'metavar': 's3_region_name'}
s3_unencrypted_connection = {'action': 'store_true', 'default': False, 'help': 'Whether to use plain HTTP (without SSL) to send data to S3'}
s3_use_deep_archive = {'action': 'store_true', 'default': False, 'help': 'Whether to use S3 Glacier Deep Archive Storage'}
s3_use_glacier = {'action': 'store_true', 'default': False, 'help': 'Whether to use S3 Glacier Storage'}
s3_use_glacier_ir = {'action': 'store_true', 'default': False, 'help': 'Whether to use S3 Glacier IR Storage'}
s3_use_ia = {'action': 'store_true', 'default': False, 'help': 'Whether to use S3 Infrequent Access Storage'}
s3_use_multiprocessing = {'action': 'store_true', 'default': False, 'help': 'Option to allow the s3/boto backend use the multiprocessing version'}
s3_use_new_style = {'action': 'store_true', 'default': False, 'help': 'Whether to use new-style subdomain addressing for S3 buckets. Such\nuse is not backwards-compatible with upper-case buckets, or buckets\nthat are otherwise not expressible in a valid hostname'}
s3_use_onezone_ia = {'action': 'store_true', 'default': False, 'help': 'Whether to use S3 One Zone Infrequent Access Storage'}
s3_use_rrs = {'action': 'store_true', 'default': False, 'help': 'Whether to use S3 Reduced Redundancy Storage'}
s3_use_server_side_encryption = {'action': 'store_true', 'default': False, 'dest': 's3_use_sse', 'help': 'Option to allow use of server side encryption in s3'}
s3_use_server_side_kms_encryption = {'action': 'store_true', 'default': False, 'dest': 's3_use_sse_kms', 'help': 'Allow use of server side KMS encryption'}
scp_command = {'default': None, 'help': 'SCP command to use (ssh pexpect backend)', 'metavar': 'command'}
sftp_command = {'default': None, 'help': 'SFTP command to use (ssh pexpect backend)', 'metavar': 'command'}
short_filenames = {'action': <class 'duplicity.cli_util.DeprecationAction'>, 'help': '==SUPPRESS==', 'nargs': 0}
show_changes_in_set = {'default': None, 'help': 'Show file changes (new, deleted, changed) in the specified backup\nset (0 specifies latest, 1 specifies next latest, etc.)', 'metavar': 'number', 'type': <class 'int'>}
sign_key = {'default': None, 'help': 'Sign key for encryption/decryption', 'metavar': 'gpg-key-id', 'type': <function set_sign_key>}
skip_volume = {'help': '==SUPPRESS==', 'type': <class 'int'>}
ssh_askpass = {'action': 'store_true', 'default': False, 'help': 'Ask the user for the SSH password. Not for batch usage'}
ssh_options = {'action': 'append', 'default': '', 'help': 'SSH options to add', 'metavar': 'options'}
ssl_cacert_file = {'default': None, 'help': 'pem formatted bundle of certificate authorities', 'metavar': 'file'}
ssl_cacert_path = {'default': None, 'help': 'path to a folder with certificate authority files', 'metavar': 'path'}
ssl_no_check_certificate = {'action': 'store_true', 'default': False, 'help': 'Set to not validate SSL certificates'}
swift_storage_policy = {'default': '', 'help': 'Option to specify a Swift container storage policy.', 'metavar': 'policy'}
tempdir = {'default': None, 'dest': 'temproot', 'help': 'Working directory for temp files', 'metavar': 'path', 'type': <function check_file>}
time_separator = {'default': ':', 'help': "Character used like the ':' in time strings like\n2002-08-06T04:22:00-07:00", 'metavar': 'char'}
timeout = {'default': 30, 'help': 'Network timeout in seconds', 'metavar': 'seconds', 'type': <class 'int'>}
use_agent = {'action': 'store_true', 'default': False, 'help': 'Whether to specify --use-agent in GnuPG options'}
verbosity = {'default': 3, 'help': 'Logging verbosity', 'metavar': '[0-9]', 'type': <function check_verbosity>}
version = {'action': 'version', 'help': 'Display version and exit', 'version': '%(prog)s $version'}
volsize = {'default': 200, 'help': 'Volume size to use in MiB', 'metavar': 'number', 'type': <function set_megs>}
webdav_headers = {'default': '', 'help': "extra headers for Webdav, like 'Cookie,name: value'", 'metavar': 'string'}
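As a minimal sketch of how these kwargs could feed argparse (duplicity's real parser construction lives in cli_main and may differ), each attribute name maps to an option string by swapping underscores for hyphens, as var2opt does in cli_util:

    import argparse
    from duplicity.cli_data import OptionKwargs

    parser = argparse.ArgumentParser(prog="duplicity")
    for var in ("verbosity", "timeout", "dry_run"):
        kwargs = getattr(OptionKwargs, var)  # the dicts shown above
        parser.add_argument("--" + var.replace("_", "-"), **kwargs)

    ns = parser.parse_args(["--verbosity", "5", "--dry-run"])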
duplicity.cli_main module

Main for parse command line, check for consistency, and set config

class duplicity.cli_main.DuplicityHelpFormatter(prog, indent_increment=2, max_help_position=24, width=None)[source]

Bases: ArgumentDefaultsHelpFormatter, RawDescriptionHelpFormatter

A working class to combine ArgumentDefaults, RawDescription. Use with make_wide() to ensure we catch argparse API changes.

duplicity.cli_main.make_wide(formatter, w=120, h=46)[source]

Return a wider HelpFormatter, if possible. See: https://stackoverflow.com/a/5464440 Beware: “Only the name of this class is considered a public API.”
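A sketch of the widely circulated recipe this function is based on (duplicity's exact body may differ; the probe pokes at argparse internals, hence the warning above):

    import argparse
    import functools

    def make_wide_sketch(formatter, w=120, h=46):
        # Probe the private constructor; if the argparse API changed,
        # fall back to the unmodified formatter class.
        try:
            kwargs = {"width": w, "max_help_position": h}
            formatter(None, **kwargs)
            return functools.partial(formatter, **kwargs)
        except TypeError:
            return formatter

    parser = argparse.ArgumentParser(
        formatter_class=make_wide_sketch(argparse.ArgumentDefaultsHelpFormatter))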

duplicity.cli_main.parse_cmdline_options(arglist)[source]

Parse argument list

duplicity.cli_main.process_command_line(cmdline_list)[source]

Process command line, set config
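A minimal invocation sketch (per the description above, config is populated as a side effect; any return value is ignored here):

    import sys
    from duplicity import cli_main

    cli_main.process_command_line(sys.argv[1:])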

duplicity.cli_util module

Utils for parse command line, check for consistency, and set config

class duplicity.cli_util.AddFilelistAction(option_strings, dest, **kwargs)[source]

Bases: DuplicityAction

__call__(parser, namespace, values, option_string=None)[source]

Call self as a function.

__init__(option_strings, dest, **kwargs)[source]
class duplicity.cli_util.AddRenameAction(option_strings, dest, **kwargs)[source]

Bases: DuplicityAction

__call__(parser, namespace, values, option_string=None)[source]

Call self as a function.

__init__(option_strings, dest, **kwargs)[source]
class duplicity.cli_util.AddSelectionAction(option_strings, dest, **kwargs)[source]

Bases: DuplicityAction

__call__(parser, namespace, values, option_string=None)[source]

Call self as a function.

__init__(option_strings, dest, **kwargs)[source]
exception duplicity.cli_util.CommandLineError[source]

Bases: UserError

class duplicity.cli_util.DeprecationAction(option_strings, dest, **kwargs)[source]

Bases: DuplicityAction

__call__(parser, namespace, values, option_string=None)[source]

Call self as a function.

__init__(option_strings, dest, **kwargs)[source]
class duplicity.cli_util.DuplicityAction(option_strings, dest, **kwargs)[source]

Bases: Action

__call__(parser, namespace, values, option_string=None)[source]

Call self as a function.

__init__(option_strings, dest, **kwargs)[source]
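For illustration, the pattern these Action subclasses follow can be sketched with plain argparse (DemoDeprecationAction is hypothetical, not duplicity's class):

    import argparse
    import sys

    class DemoDeprecationAction(argparse.Action):
        def __init__(self, option_strings, dest, **kwargs):
            kwargs.setdefault("nargs", 0)  # matches the nargs=0 entries above
            super().__init__(option_strings, dest, **kwargs)

        def __call__(self, parser, namespace, values, option_string=None):
            # Warn and otherwise ignore a retired option.
            print(f"Warning: {option_string} is deprecated and ignored.",
                  file=sys.stderr)

    parser = argparse.ArgumentParser()
    parser.add_argument("--old-filenames", action=DemoDeprecationAction,
                        help=argparse.SUPPRESS)
    parser.parse_args(["--old-filenames"])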
duplicity.cli_util.check_count(val)[source]
duplicity.cli_util.check_file(value)[source]
duplicity.cli_util.check_remove_time(val)[source]
duplicity.cli_util.check_source_path(val)[source]
duplicity.cli_util.check_source_url(val)[source]
duplicity.cli_util.check_target_dir(val)[source]
duplicity.cli_util.check_target_url(val)[source]
duplicity.cli_util.check_time(value)[source]
duplicity.cli_util.check_verbosity(value)[source]
duplicity.cli_util.cmd2var(s)[source]

Convert command string to var name

duplicity.cli_util.command_line_error(message)[source]

Indicate a command line error and exit

duplicity.cli_util.dflt(val)[source]

Return printable value for default.

duplicity.cli_util.expand_archive_dir(archdir, backname)[source]

Return expanded version of archdir joined with backname.

duplicity.cli_util.expand_fn(filename)[source]
duplicity.cli_util.generate_default_backup_name(backend_url)[source]

@param backend_url: URL to backend. @returns A default backup name (string).

duplicity.cli_util.make_bytes(value)[source]
duplicity.cli_util.opt2var(s)[source]

Convert option string to var name

duplicity.cli_util.set_archive_dir(dirstring)[source]

Check archive dir and set global

duplicity.cli_util.set_encrypt_key(encrypt_key)[source]

Set config.gpg_profile.encrypt_key assuming proper key given

duplicity.cli_util.set_hidden_encrypt_key(hidden_encrypt_key)[source]

Set config.gpg_profile.hidden_encrypt_key assuming proper key given

duplicity.cli_util.set_log_fd(fd)[source]
duplicity.cli_util.set_log_file(fn)[source]
duplicity.cli_util.set_megs(num)[source]
duplicity.cli_util.set_selection()[source]

Return selection iter starting at filename with arguments applied

duplicity.cli_util.set_sign_key(sign_key)[source]

Set config.gpg_profile.sign_key assuming proper key given

duplicity.cli_util.var2cmd(s)[source]

Convert var name to command string

duplicity.cli_util.var2opt(s)[source]

Convert var name to option string

duplicity.config module

Store global configuration information

duplicity.diffdir module

Functions for producing signatures and deltas of directories

Note that the main processes of this module have two parts. In the first, the signature or delta is constructed from a ROPath iterator. In the second, the ROPath iterator is put into tar block form.

class duplicity.diffdir.DeltaTarBlockIter(input_iter)[source]

Bases: TarBlockIter

TarBlockIter that yields parts of a deltatar file

Unlike SigTarBlockIter, the argument to __init__ is a delta_path_iter, so the delta information has already been calculated.

get_data_block(fp)[source]

Return pair (next data block, boolean last data block)

process(delta_ropath)[source]

Get a tarblock from delta_ropath

process_continued()[source]

Return next volume in multivol diff or snapshot

exception duplicity.diffdir.DiffDirException[source]

Bases: Exception

duplicity.diffdir.DirDelta(path_iter, dirsig_fileobj_list)[source]

Produce tarblock diff given dirsig_fileobj_list and pathiter

dirsig_fileobj_list should either be a tar fileobj or a list of those, sorted so the most recent is last.

duplicity.diffdir.DirDelta_WriteSig(path_iter, sig_infp_list, newsig_outfp)[source]

Like DirDelta but also write signature into sig_fileobj

Like DirDelta, sig_infp_list can be a tar fileobj or a sorted list of those. A signature will only be written to newsig_outfp if it is different from (the combined) sig_infp_list.

duplicity.diffdir.DirFull(path_iter)[source]

Return a tarblock full backup of items in path_iter

A full backup is just a diff starting from nothing (it may be less elegant than using a standard tar file, but we can be sure that it will be easy to split up the tar and make the volumes the same sizes).

duplicity.diffdir.DirFull_WriteSig(path_iter, sig_outfp)[source]

Return full backup like above, but also write signature to sig_outfp

duplicity.diffdir.DirSig(path_iter)[source]

Alias for SigTarBlockIter below

class duplicity.diffdir.DummyBlockIter(input_iter)[source]

Bases: TarBlockIter

TarBlockIter that does no file reading

process(delta_ropath)[source]

Get a fake tarblock from delta_ropath

class duplicity.diffdir.FileWithReadCounter(infile)[source]

Bases: object

File-like object which also computes amount read as it is read

__init__(infile)[source]

FileWithReadCounter initializer

close()[source]
read(length=-1)[source]
class duplicity.diffdir.FileWithSignature(infile, callback, filelen, *extra_args)[source]

Bases: object

File-like object which also computes signature as it is read

__init__(infile, callback, filelen, *extra_args)[source]

FileWithSignature initializer

The object will act like infile, but whenever it is read it adds infile’s data to a SigGenerator object. When the file has been read to the end the callback will be called with the calculated signature, and any extra_args if given.

filelen is used to calculate the block size of the signature.

blocksize = 32768
close()[source]
read(length=-1)[source]
class duplicity.diffdir.SigTarBlockIter(input_iter)[source]

Bases: TarBlockIter

TarBlockIter that yields blocks of a signature tar from path_iter

process(path)[source]

Return associated signature TarBlock from path

class duplicity.diffdir.TarBlock(index, data)[source]

Bases: object

Contain information to add next file to tar

__init__(index, data)[source]

TarBlock initializer - just store data

class duplicity.diffdir.TarBlockIter(input_iter)[source]

Bases: object

A bit like an iterator, yield tar blocks given input iterator

Unlike an iterator, however, control over the maximum size of a tarblock is available by passing an argument to next(). Also, get_footer() is available.

__init__(input_iter)[source]

TarBlockIter initializer

__next__()[source]

Return next block and update offset

get_footer()[source]

Return closing string for tarfile, reset offset

get_previous_index()[source]

Return index of last tarblock, or None if no previous index

get_read_size()[source]
process(val)[source]

Turn next value of input_iter into a TarBlock

process_continued()[source]

Get more tarblocks

If processing val above would produce more than one TarBlock, get the rest of them by calling process_continued().

queue_index_data(data)[source]

Next time next() is called, we will return data instead of processing

recall_index()[source]

Retrieve index remembered with remember_next_index

remember_next_index()[source]

When called, remember the index of the next block iterated

tarinfo2tarblock(index, tarinfo, file_data=b'')[source]

Make tarblock out of tarinfo and file data
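A schematic consumer of a TarBlockIter, under the assumption that TarBlock exposes its payload as .data (write_block_iter below plays this role in duplicity itself):

    def drain_tarblocks(tarblock_iter, out_fp):
        # Write every block, then the closing tar padding from get_footer().
        for block in tarblock_iter:
            out_fp.write(block.data)
        out_fp.write(tarblock_iter.get_footer())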

duplicity.diffdir.collate2iters(riter1, riter2)[source]

Collate two iterators.

The elements yielded by each iterator must have an index variable, and this function returns pairs (elem1, elem2), (elem1, None), or (None, elem2). The two elements in a pair will have the same index, and earlier indices are yielded before later ones.
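A minimal generator expressing that contract (collate_pairs is an illustrative stand-in, not duplicity's implementation):

    def collate_pairs(iter1, iter2):
        # Both inputs must be sorted by .index; yields matched pairs,
        # with None standing in for a missing side.
        a = next(iter1, None)
        b = next(iter2, None)
        while a is not None or b is not None:
            if b is None or (a is not None and a.index < b.index):
                yield a, None
                a = next(iter1, None)
            elif a is None or b.index < a.index:
                yield None, b
                b = next(iter2, None)
            else:
                yield a, b
                a = next(iter1, None)
                b = next(iter2, None)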

duplicity.diffdir.combine_path_iters(path_iter_list)[source]

Produce new iterator by combining the iterators in path_iter_list

This new iter will iterate every path that is in path_iter_list in order of increasing index. If multiple iterators in path_iter_list yield paths with the same index, combine_path_iters will discard all paths but the one yielded by the last path_iter.

This is used to combine signature iters, as the output will be a full up-to-date signature iter.

duplicity.diffdir.delta_iter_error_handler(exc, new_path, sig_path, sig_tar=None)[source]

Called by get_delta_iter, report error in getting delta

duplicity.diffdir.get_block_size(file_len)[source]

Return a reasonable block size to use on files of length file_len

If the block size is too big, deltas will be bigger than is necessary. If the block size is too small, making deltas and patching can take a really long time.
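As an illustration of that tradeoff only (not duplicity's actual formula), a heuristic might scale the block size with the file length and clamp it to sane bounds:

    def demo_block_size(file_len):
        # Hypothetical: aim for roughly 2000 blocks per file,
        # clamped between 512 bytes and 2 MiB.
        size = file_len // 2000
        return max(512, min(size, 2 * 1024 * 1024))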

duplicity.diffdir.get_combined_path_iter(sig_infp_list)[source]

Return path iter combining signatures in list of open sig files

duplicity.diffdir.get_delta_iter(new_iter, sig_iter, sig_fileobj=None)[source]

Generate delta iter from new Path iter and sig Path iter.

For each delta path of regular file type, path.difftype will be set to “snapshot” or “diff”. sig_iter will probably iterate ROPaths instead of Paths.

If sig_fileobj is not None, will also write signatures to sig_fileobj.

duplicity.diffdir.get_delta_path(new_path, sig_path, sigTarFile=None)[source]

Return new delta_path which, when read, writes sig to sigTarFile, if sigTarFile is not None

duplicity.diffdir.log_delta_path(delta_path, new_path=None, stats=None)[source]

Look at delta path and log delta. Add stats if new_path is set

duplicity.diffdir.sigtar2path_iter(sigtarobj)[source]

Convert signature tar file object open for reading into path iter

duplicity.diffdir.write_block_iter(block_iter, out_obj)[source]

Write block_iter to out_obj, which may be a filename, path, or file object

duplicity.dup_collections module

Classes and functions on collections of backup volumes

class duplicity.dup_collections.BackupChain(backend)[source]

Bases: object

BackupChain - a number of linked BackupSets

A BackupChain always starts with a full backup set and continues with incremental ones.

__init__(backend)[source]

Initialize new chain, only backend is required at first

add_inc(incset)[source]

Add incset to self. Return False if incset does not match

delete(keep_full=False)[source]

Delete all sets in chain, in reverse order

get_all_sets()[source]

Return list of all backup sets in chain

get_first()[source]

Return first BackupSet in chain (ie the full backup)

get_last()[source]

Return last BackupSet in chain

get_num_volumes()[source]

Return the total number of volumes in the chain

get_sets_at_time(time)[source]

Return a list of sets in chain earlier or equal to time

set_full(fullset)[source]

Add full backup set

short_desc()[source]

Return a short one-line description of the chain, suitable for log messages.

to_log_info(prefix='')[source]

Return summary, suitable for printing to log

class duplicity.dup_collections.BackupSet(backend, action)[source]

Bases: object

Backup set - the backup information produced by one session

__init__(backend, action)[source]

Initialize new backup set, only backend is required at first

add_filename(filename, pr=None)[source]

Add a filename to given set. Return true if it fits.

The filename will match the given set if it has the right times and is of the right type. The information will be set from the first filename given.

@param filename: name of file to add @type filename: string

@param pr: pre-computed result of file_naming.parse(filename) @type pr: Optional[ParseResults]

check_manifests(check_remote=True)[source]

Make sure remote manifest is equal to local one

delete()[source]

Remove all files in set, both local and remote

get_filenames()[source]

Return sorted list of (remote) filenames of files in set

get_files_changed()[source]
get_local_manifest()[source]

Return manifest object by reading local manifest file

get_manifest()[source]

Return manifest object, showing preference for local copy

get_remote_manifest()[source]

Return manifest by reading remote manifest on backend

get_time()[source]

Return time if full backup, or end_time if incremental

get_timestr()[source]

Return time string suitable for log statements

is_complete()[source]

Assume complete if found manifest file

set_files_changed()[source]
set_info(pr)[source]

Set BackupSet information from ParseResults object

@param pr: parse results @type pr: ParseResults

set_manifest(remote_filename)[source]

Add local and remote manifest filenames to backup set

class duplicity.dup_collections.BackupSetChangesStatus(backup_set)[source]

Bases: object

__init__(backup_set)[source]
exception duplicity.dup_collections.CollectionsError[source]

Bases: Exception

class duplicity.dup_collections.CollectionsStatus(backend, archive_dir_path, action)[source]

Bases: object

Hold information about available chains and sets

__init__(backend, archive_dir_path, action)[source]

Make new object. Does not set values

get_all_file_changed_records(set_index)[source]

Returns file changes in the specific backup set

get_backup_chain_at_time(time)[source]

Return backup chain covering specified time

Tries to find the backup chain covering the given time. If there is none, return the earliest chain before, and failing that, the earliest chain.

get_backup_chains(filename_list)[source]

Split given filename_list into chains

Return value will be tuple (list of chains, list of sets, list of incomplete sets), where the list of sets will comprise sets not fitting into any chain, and the incomplete sets are sets missing files.

get_chains_older_than(t)[source]

Returns a list of backup chains older than the given time t

All of the times will be associated with an intact chain. Furthermore, none of the times will be of a chain which a newer set may depend on. For instance, if set A is a full set older than t, and set B is an incremental based on A which is newer than t, then the time of set A will not be returned.

get_extraneous()[source]

Return list of the names of extraneous duplicity files

A duplicity file is considered extraneous if it is recognizable as a duplicity file, but isn’t part of some complete backup set, or current signature chain.

get_file_changed_record(filepath)[source]

Returns time line of specified file changed

get_last_backup_chain()[source]

Return the last full backup of the collection, or None if there is no full backup chain.

get_last_full_backup_time()[source]

Return the time of the last full backup, or 0 if there is none.

get_nth_last_backup_chain(n)[source]

Return the nth-to-last full backup of the collection, or None if there are fewer than n backup chains.

NOTE: n = 1 -> time of latest available chain (n = 0 is not a valid input). Thus the second-to-last is obtained with n=2 rather than n=1.

get_nth_last_full_backup_time(n)[source]

Return the time of the nth to last full backup, or 0 if there is none.

get_older_than(t)[source]

Returns a list of backup sets older than the given time t

All of the times will be associated with an intact chain. Furthermore, none of the times will be of a set which a newer set may depend on. For instance, if set A is a full set older than t, and set B is an incremental based on A which is newer than t, then the time of set A will not be returned.

get_older_than_required(t)[source]

Returns list of old backup sets required by new sets

This function is similar to the previous one, but it only returns the times of sets which are old but part of the chains where the newer end of the chain is newer than t.

get_signature_chain_at_time(time)[source]

Return signature chain covering specified time

Tries to find the signature chain covering the given time. If there is none, return the earliest chain before, and failing that, the earliest chain.

get_signature_chains(local, filelist=None)[source]

Find chains in archive_dir_path (if local is true) or backend

Use filelist if given, otherwise regenerate. Return value is pair (list of chains, list of signature paths not in any chains).

get_signature_chains_older_than(t)[source]

Returns a list of signature chains older than the given time t

All of the times will be associated with an intact chain. Furthermore, none of the times will be of a chain which a newer set may depend on. For instance, if set A is a full set older than t, and set B is an incremental based on A which is newer than t, then the time of set A will not be returned.

get_sorted_chains(chain_list)[source]

Return chains sorted by end_time. If tie, local goes last

get_sorted_sets(set_list)[source]

Sort set list by end time, return (sorted list, incomplete)

set_matched_chain_pair(sig_chains, backup_chains)[source]

Set self.matched_chain_pair and self.other_sig/backup_chains

The latest matched_chain_pair will be set. If there are both remote and local signature chains capable of matching the latest backup chain, use the local sig chain (it does not need to be downloaded).

set_values(sig_chain_warning=1)[source]

Set values from archive_dir_path and backend.

Returns self for convenience. If sig_chain_warning is set to None, do not warn about unnecessary sig chains. This is because there may naturally be some unnecessary ones after a full backup.

sort_sets(setlist)[source]

Return new list containing same elems of setlist, sorted by time

to_log_info()[source]

Return summary of the collection, suitable for printing to log

warn(sig_chain_warning)[source]

Log various error messages if find incomplete/orphaned files
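A usage sketch grounded in set_values() returning self (backend, archive_dir_path, action and restore_time are placeholders assumed to exist in scope):

    from duplicity import dup_collections

    col_stats = dup_collections.CollectionsStatus(
        backend, archive_dir_path, action).set_values()
    chain = col_stats.get_backup_chain_at_time(restore_time)
    extraneous = col_stats.get_extraneous()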

class duplicity.dup_collections.FileChangedStatus(filepath, fileinfo_list)[source]

Bases: object

__init__(filepath, fileinfo_list)[source]
class duplicity.dup_collections.SignatureChain(local, location)[source]

Bases: object

A number of linked SignatureSets

Analog to BackupChain - start with a full-sig, and continue with new-sigs.

__init__(local, location)[source]

Return new SignatureChain.

local should be true iff the signature chain resides in config.archive_dir_path and false if the chain is in config.backend.

@param local: True if sig chain in config.archive_dir_path @type local: Boolean

@param location: Where the sig chain is located @type location: config.archive_dir_path or config.backend

add_filename(filename, pr=None)[source]

Add new sig filename to current chain. Return true if fits

check_times(time_list)[source]

Check to make sure times are in whole seconds

delete(keep_full=False)[source]

Remove all files in signature set

get_filenames(time=None)[source]

Return ordered list of filenames in set, up to a provided time

get_fileobjs(time=None)[source]

Return ordered list of signature fileobjs opened for reading, optionally at a certain time

islocal()[source]

Return true if represents a signature chain in archive_dir_path

duplicity.dup_main module
class duplicity.dup_main.Restart(last_backup)[source]

Bases: object

Class to aid in restart of inc or full backup. Instance in config.restart if restart in progress.

__init__(last_backup)[source]
checkManifest(mf)[source]
setLastSaved(mf)[source]
setParms(last_backup)[source]
duplicity.dup_main.check_last_manifest(col_stats)[source]

Check consistency and hostname/directory of last manifest

@type col_stats: CollectionStatus object @param col_stats: collection status

@rtype: void @return: void

duplicity.dup_main.check_resources(action)[source]

Check for sufficient resources:

- temp space for volume build
- enough max open files

Put out fatal error if not sufficient to run

@type action: string @param action: action in progress

@rtype: void @return: void

duplicity.dup_main.check_sig_chain(col_stats)[source]

Get last signature chain for inc backup, or None if none available

@type col_stats: CollectionStatus object @param col_stats: collection status

duplicity.dup_main.cleanup(col_stats)[source]

Delete the extraneous files in the current backend

@type col_stats: CollectionStatus object @param col_stats: collection status

@rtype: void @return: void

duplicity.dup_main.do_backup(action)[source]
duplicity.dup_main.dummy_backup(tarblock_iter)[source]

Fake writing to backend, but do go through all the source paths.

@type tarblock_iter: tarblock_iter @param tarblock_iter: iterator for current tar block

@rtype: int @return: constant 0 (zero)

duplicity.dup_main.full_backup(col_stats)[source]

Do full backup of directory to backend, using archive_dir_path

@type col_stats: CollectionStatus object @param col_stats: collection status

@rtype: void @return: void

duplicity.dup_main.get_man_fileobj(backup_type)[source]

Return a fileobj opened for writing, save results as manifest

Save manifest in config.archive_dir_path gzipped. Save them on the backend encrypted as needed.

@type backup_type: string @param backup_type: either “full” or “new”

@rtype: fileobj @return: fileobj opened for writing

duplicity.dup_main.get_passphrase(n, action, for_signing=False)[source]

Check to make sure passphrase is indeed needed, then get the passphrase from environment, from gpg-agent, or user

If n=3, a password is requested and verified. If n=2, the current password is verified. If n=1, a password is requested without verification for the time being.

@type n: int @param n: verification level for a passphrase being requested

@type action: string @param action: action to perform

@type for_signing: boolean @param for_signing: true if the passphrase is for a signing key, false if not

@rtype: string @return: passphrase

duplicity.dup_main.get_sig_fileobj(sig_type)[source]

Return a fileobj opened for writing, save results as signature

Save signatures in config.archive_dir gzipped. Save them on the backend encrypted as needed.

@type sig_type: string @param sig_type: either “full-sig” or “new-sig”

@rtype: fileobj @return: fileobj opened for writing

duplicity.dup_main.getpass_safe(message)[source]
duplicity.dup_main.incremental_backup(sig_chain)[source]

Do incremental backup of directory to backend, using archive_dir_path

@rtype: void @return: void

duplicity.dup_main.list_current(col_stats)[source]

List the files current in the archive (examining signature only)

@type col_stats: CollectionStatus object @param col_stats: collection status

@rtype: void @return: void

duplicity.dup_main.log_startup_parms(verbosity=5)[source]

log Python, duplicity, and system versions

duplicity.dup_main.main()[source]

Start/end here

duplicity.dup_main.print_statistics(stats, bytes_written)[source]

If config.print_statistics, print stats after adding bytes_written

@rtype: void @return: void

duplicity.dup_main.remove_all_but_n_full(col_stats)[source]

Remove backup files older than the last n full backups.

@type col_stats: CollectionStatus object @param col_stats: collection status

@rtype: void @return: void

duplicity.dup_main.remove_old(col_stats)[source]

Remove backup files older than config.remove_time from backend

@type col_stats: CollectionStatus object @param col_stats: collection status

@rtype: void @return: void

duplicity.dup_main.restart_position_iterator(tarblock_iter)[source]

Fake writing to backend, but do go through all the source paths. Stop when we have processed the last file and block from the last backup. Normal backup will proceed at the start of the next volume in the set.

@type tarblock_iter: tarblock_iter @param tarblock_iter: iterator for current tar block

@rtype: int @return: constant 0 (zero)

duplicity.dup_main.restore(col_stats)[source]

Restore archive in config.backend to config.local_path

@type col_stats: CollectionStatus object @param col_stats: collection status

@rtype: void @return: void

duplicity.dup_main.restore_add_sig_check(fileobj)[source]

Require signature when closing fileobj matches sig in gpg_profile

@rtype: void @return: void

duplicity.dup_main.restore_check_hash(volume_info, vol_path)[source]

Check the hash of vol_path path against data in volume_info

@rtype: boolean @return: true (verified) / false (failed)

duplicity.dup_main.restore_get_enc_fileobj(backend, filename, volume_info)[source]

Return plaintext fileobj from encrypted filename on backend

If volume_info is set, the hash of the file will be checked, assuming some hash is available. Also, if config.sign_key is set, a fatal error will be raised if file not signed by sign_key.

With --ignore-errors set, continue on hash mismatch

duplicity.dup_main.restore_get_patched_rop_iter(col_stats)[source]

Return iterator of patched ROPaths of desired restore data

@type col_stats: CollectionStatus object @param col_stats: collection status

duplicity.dup_main.sync_archive(col_stats)[source]

Synchronize local archive manifest file and sig chains to remote archives. Copy missing files from remote to local as needed to make sure the local archive is synchronized to remote storage.

@rtype: void @return: void

duplicity.dup_main.verify(col_stats)[source]

Verify files, logging differences

@type col_stats: CollectionStatus object @param col_stats: collection status

@rtype: void @return: void

duplicity.dup_main.write_multivol(backup_type, tarblock_iter, man_outfp, sig_outfp, backend)[source]

Encrypt volumes of tarblock_iter and write to backend

backup_type should be “inc” or “full” and only matters here when picking the filenames. The path_prefix will determine the names of the files written to backend. Also writes manifest file. Returns number of bytes written.

@type backup_type: string @param backup_type: type of backup to perform, either ‘inc’ or ‘full’

@type tarblock_iter: tarblock_iter @param tarblock_iter: iterator for current tar block

@type backend: callable backend object @param backend: I/O backend for selected protocol

@rtype: int @return: bytes written

duplicity.dup_temp module

Manage temporary files

class duplicity.dup_temp.Block(data)[source]

Bases: object

Data block to return from SrcIter

__init__(data)[source]
class duplicity.dup_temp.FileobjHooked(fileobj, tdp=None, dirpath=None, partname=None, permname=None, remname=None)[source]

Bases: object

Simulate a file, but add hook on close

__init__(fileobj, tdp=None, dirpath=None, partname=None, permname=None, remname=None)[source]

Initializer. fileobj is the file object to simulate

addhook(hook)[source]

Add hook (function taking no arguments) to run upon closing

close()[source]

Close fileobj, running hooks right afterwards

flush()[source]

Flush fileobj and force sync.

get_name()[source]

Return the name of the file

property name

Return the name of the file

read(length=-1)[source]

Read fileobj, return result of read()

seek(offset)[source]

Seeks to a location of fileobj

tell()[source]

Returns current location of fileobj

to_final()[source]

We are finished, rename to final, gzip if needed.

to_partial()[source]

We have achieved the first checkpoint, make file visible and permanent.

to_remote()[source]

We have written the last checkpoint, now encrypt or compress and send a copy of it to the remote for final storage.

write(buf)[source]

Write fileobj, return result of write()

class duplicity.dup_temp.SrcIter(src)[source]

Bases: object

Iterate over source and return Block of data.

__init__(src)[source]
__next__()[source]
get_read_size()[source]
class duplicity.dup_temp.TempDupPath(base, index=(), parseresults=None)[source]

Bases: DupPath

Like TempPath, but build around DupPath

delete()[source]

Forget and delete

filtered_open_with_delete(mode)[source]

Returns a filtered fileobj. When that is closed, delete file

open_with_delete(mode='rb')[source]

Returns a fileobj. When that is closed, delete file

class duplicity.dup_temp.TempPath(base, index=())[source]

Bases: Path

Path object used as a temporary file

delete()[source]

Forget and delete

open_with_delete(mode)[source]

Returns a fileobj. When that is closed, delete file

duplicity.dup_temp.get_fileobj_duppath(dirpath, partname, permname, remname, overwrite=False)[source]

Return a file object open for writing, will write to filename

Data will be processed and written to a temporary file. When the returned fileobj is closed, it is renamed to its final position. filename must be a recognizable duplicity data file.

duplicity.dup_temp.new_tempduppath(parseresults)[source]

Return a new TempDupPath, using settings from parseresults

duplicity.dup_temp.new_temppath()[source]

Return a new TempPath

duplicity.dup_threading module

Duplicity specific but otherwise generic threading interfaces and utilities.

(Not called “threading” because we do not want to conflict with the standard threading module.)

class duplicity.dup_threading.Value(value=None)[source]

Bases: object

A thread-safe container of a reference to an object (but not the object itself).

In particular this means it is safe to:

value.set(1)

But unsafe to:

value.get()['key'] = value

Where the latter must be done using something like:

def _setprop():
    value.get()['key'] = value

with_lock(value, _setprop)

Operations such as increments are best done as:

value.transform(lambda val: val + 1)
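
A minimal usage sketch of this pattern (assuming duplicity is importable; with_lock() is the module-level helper documented further below):

from duplicity.dup_threading import Value, with_lock

counter = Value({'hits': 0})
counter.set({'hits': 1})          # safe: replaces the reference atomically

def _bump():                      # mutating the shared dict needs the lock
    counter.get()['hits'] += 1

with_lock(counter, _bump)         # Value responds to acquire()/release()

total = Value(0)
total.transform(lambda val: val + 1)   # atomic increment of an immutable value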

__init__(value=None)[source]

Initialize with the given value.

acquire()[source]

Acquire this Value for mutually exclusive access. Only ever needed when calling code must perform operations that cannot be done with get(), set() or transform().

get()[source]

Returns the value protected by this Value.

release()[source]

Release this Value for mutually exclusive access.

set(value)[source]

Resets the value protected by this Value.

transform(fn)[source]

Call fn with the current value as the parameter, and reset the value to the return value of fn.

During the execution of fn, all other access to this Value is prevented.

If fn raised an exception, the value is not reset.

Returns the value returned by fn, or raises the exception raised by fn.

duplicity.dup_threading.async_split(fn)[source]

Splits the act of calling the given function into one front-end part for waiting on the result, and a back-end part for performing the work in another thread.

Returns (waiter, caller) where waiter is a function to be called in order to wait for the results of an asynchronous invocation of fn to complete, returning fn’s result or propagating its exception.

Caller is the function to call in a background thread in order to execute fn asynchronously. Caller will return (success, waiter) where success is a boolean indicating whether the function succeeded (did NOT raise an exception), and waiter is the waiter that was originally returned by the call to async_split().
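
A hedged sketch of the contract described above, assuming caller forwards its arguments to fn (argument passing is not spelled out here):

import threading
from duplicity.dup_threading import async_split

def work(x):
    return x * 2

waiter, caller = async_split(work)

# Back-end part: execute fn in a background thread.
t = threading.Thread(target=caller, args=(21,))
t.start()

# Front-end part: block until the result (or re-raised exception) is ready.
result = waiter()   # 42
t.join()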

duplicity.dup_threading.interruptably_wait(cv, waitFor)[source]

cv - The threading.Condition instance to wait on.

waitFor - Callable returning a boolean to indicate whether the criteria being waited on has been satisfied.

Perform a wait on a condition such that it is keyboard interruptible when done in the main thread. Due to Python limitations as of <= 2.5, lock acquisition and condition waits are not interruptible when performed in the main thread.

Currently, this comes at the cost of additional CPU use, compared to a normal wait. Future implementations may be more efficient if the underlying Python supports it.

The condition must be acquired when this function is called.

This function should only be used on conditions that are never expected to be acquired for extended periods of time, or the lock-acquire of the underlying condition could cause an uninterruptible state despite the efforts of this function.

There is no equivalent for acquiring a lock, as that cannot be done efficiently.

Example:

Instead of:

cv.acquire()
while not thing_done:
    cv.wait(someTimeout)
cv.release()

do:

cv.acquire()
interruptably_wait(cv, lambda: thing_done)
cv.release()

duplicity.dup_threading.require_threading(reason=None)[source]

Assert that threading is required for operation to continue. Raise an appropriate exception if this is not the case.

Reason specifies an optional reason why threading is required, which will be used for error reporting in case threading is not supported.

duplicity.dup_threading.thread_module()[source]

Returns the thread module, or dummy_thread if threading is not supported.

duplicity.dup_threading.threading_module()[source]

Returns the threading module, or dummy_thread if threading is not supported.

duplicity.dup_threading.threading_supported()[source]

Returns whether threading is supported on the system we are running on.

duplicity.dup_threading.with_lock(lock, fn)[source]

Call fn with lock acquired. Guarantee that lock is released upon the return of fn.

Returns the value returned by fn, or raises the exception raised by fn.

(Lock can actually be anything responding to acquire() and release().)

duplicity.dup_time module

Provide time related exceptions and functions

exception duplicity.dup_time.TimeException[source]

Bases: Exception

duplicity.dup_time.cmp(time1, time2)[source]

Compare time1 and time2 and return -1, 0, or 1

duplicity.dup_time.genstrtotime(timestr, override_curtime=None)[source]

Convert a generic time string to a time in seconds

duplicity.dup_time.gettzd(dstflag)[source]

Return w3’s timezone identification string.

Expressed as [+/-]hh:mm. For instance, PST is -08:00. The zone coincides with what localtime(), etc., use.

duplicity.dup_time.intstringtoseconds(interval_string)[source]

Convert a string expressing an interval (e.g. “4D2s”) to seconds

duplicity.dup_time.inttopretty(seconds)[source]

Convert num of seconds to readable string like “2 hours”.
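
An illustrative sketch of the interval helpers (the exact pretty-printed wording is the library’s choice):

from duplicity import dup_time

# "4D2s" = 4 days + 2 seconds
assert dup_time.intstringtoseconds("4D2s") == 4 * 86400 + 2

print(dup_time.inttopretty(7200))   # something like "2 hours"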

duplicity.dup_time.setcurtime(time_in_secs=None)[source]

Sets the current time in curtime and curtimestr

duplicity.dup_time.setprevtime(time_in_secs)[source]

Sets the previous time in prevtime and prevtimestr

duplicity.dup_time.stringtopretty(timestring)[source]

Return pretty version of time given w3 time string

duplicity.dup_time.stringtotime(timestring)[source]

Return time in seconds from w3 or duplicity timestring

If there is an error parsing the string, or it doesn’t look like a valid datetime string, return None.

duplicity.dup_time.timetopretty(timeinseconds)[source]

Return pretty version of time

duplicity.dup_time.timetostring(timeinseconds)[source]

Return w3 or duplicity datetime compliant listing of timeinseconds

duplicity.dup_time.tzdtoseconds(tzd)[source]

Given w3 compliant TZD, return how far ahead UTC is

duplicity.errors module

Error/exception classes that do not fit naturally anywhere else.

exception duplicity.errors.BackendException(msg, code=50)[source]

Bases: DuplicityError

Raised to indicate a backend specific problem.

__init__(msg, code=50)[source]
exception duplicity.errors.BadVolumeException[source]

Bases: DuplicityError

exception duplicity.errors.ConflictingScheme[source]

Bases: DuplicityError

Raised to indicate an attempt was made to register a backend for a scheme for which there is already a backend registered.

exception duplicity.errors.DuplicityError[source]

Bases: Exception

exception duplicity.errors.FatalBackendException(msg, code=50)[source]

Bases: BackendException

Raised to indicate a backend failed fatally.

exception duplicity.errors.InvalidBackendURL[source]

Bases: UserError

Raised to indicate a URL was not a valid backend URL.

exception duplicity.errors.NotSupported[source]

Bases: DuplicityError

Exception raised when an action cannot be completed because some particular feature is not supported by the environment.

exception duplicity.errors.TemporaryLoadException(msg, code=50)[source]

Bases: BackendException

Raised to indicate a temporary issue on the backend. Duplicity should back off for a bit and try again.

exception duplicity.errors.UnsupportedBackendScheme(url)[source]

Bases: InvalidBackendURL, UserError

Raised to indicate that a backend URL was parsed successfully as a URL, but was not supported.

__init__(url)[source]
exception duplicity.errors.UserError[source]

Bases: DuplicityError

Subclasses use this in their inheritance hierarchy to signal that the error is a user generated one, and that it is therefore typically unsuitable to display a full stack trace.

duplicity.file_naming module

Produce and parse the names of duplicity’s backup files

class duplicity.file_naming.ParseResults(type, manifest=None, volume_number=None, time=None, start_time=None, end_time=None, encrypted=None, compressed=None, partial=False)[source]

Bases: object

Hold information taken from a duplicity filename

__init__(type, manifest=None, volume_number=None, time=None, start_time=None, end_time=None, encrypted=None, compressed=None, partial=False)[source]
duplicity.file_naming.from_base36(s)[source]

Convert string s in base 36 to long int

duplicity.file_naming.get(type, volume_number=None, manifest=False, encrypted=False, gzipped=False, partial=False)[source]

Return duplicity filename of specified type

type can be “full”, “inc”, “full-sig”, or “new-sig”. volume_number can be given with the full and inc types. If manifest is true the filename is of a full or inc manifest file.
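
A hedged sketch of generating and parsing a name. Note these helpers read global state (file prefix, current time set via dup_time.setcurtime()), so real use happens inside a configured duplicity session:

from duplicity import dup_time, file_naming

dup_time.setcurtime()          # get() names full/inc sets by the current time

name = file_naming.get("full", volume_number=1, encrypted=True)

pr = file_naming.parse(name)   # ParseResults or None
if pr is not None:
    print(pr.type, pr.volume_number, pr.encrypted)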

duplicity.file_naming.get_suffix(encrypted, gzipped)[source]

Return appropriate suffix depending on status of encryption or compression or neither.

duplicity.file_naming.parse(filename)[source]

Parse duplicity filename, return None or ParseResults object

duplicity.file_naming.prepare_regex(force=False)[source]
duplicity.file_naming.to_base36(n)[source]

Return string representation of n in base 36 (use 0-9 and a-z)
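
These two functions are simple inverses; for example:

from duplicity.file_naming import from_base36, to_base36

s = to_base36(123456)          # base-36 digits drawn from 0-9 and a-z
assert from_base36(s) == 123456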

duplicity.filechunkio module
class duplicity.filechunkio.FileChunkIO(name, mode='r', closefd=True, offset=0, bytes=None, *args, **kwargs)[source]

Bases: FileIO

A class that allows you to read only a chunk of a file.

__init__(name, mode='r', closefd=True, offset=0, bytes=None, *args, **kwargs)[source]

Open a file chunk. The mode can only be ‘r’ for reading. Offset is the number of bytes the chunk starts after the real file’s first byte. Bytes defines the number of bytes in the chunk; set it to None to include everything through the last byte of the real file.
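
A short usage sketch (the path is hypothetical; FileIO’s context-manager support applies):

from duplicity.filechunkio import FileChunkIO

# Expose bytes 4096..5119 of an existing file as their own little file.
with FileChunkIO('some.file', mode='r', offset=4096, bytes=1024) as chunk:
    data = chunk.read()        # at most 1024 bytes
    chunk.seek(0)              # positions are relative to the chunk, not the file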

read(n=-1)[source]

Read and return at most n bytes.

readall()[source]

Read all data from the chunk.

readinto(b)[source]

Same as RawIOBase.readinto().

seek(offset, whence=0)[source]

Move to a new chunk position.

tell()[source]

Current file position.

duplicity.globmatch module
exception duplicity.globmatch.FilePrefixError[source]

Bases: GlobbingError

Signals that a specified file doesn’t start with correct prefix

exception duplicity.globmatch.GlobbingError[source]

Bases: Exception

Something has gone wrong when parsing a glob string

duplicity.globmatch._glob_get_prefix_regexs(glob_str)[source]

Return list of regexps equivalent to prefixes of glob_str

duplicity.globmatch.glob_to_regex(pat)[source]

Return a regular expression equivalent to shell glob pat

Currently only the ?, *, [], and ** expressions are supported. Ranges like [a-z] are currently unsupported. There is no way to quote these special characters.

This function taken with minor modifications from efnmatch.py by Donovan Baarda.

duplicity.globmatch.select_fn_from_glob(glob_str, include, ignore_case=False)[source]

Return a function test_fn(path) which tests whether path matches glob_str, as per the Unix shell rules. glob_str is the glob string and include gives its sense (0 indicating that the glob string is an exclude glob, 1 indicating that it is an include glob). test_fn returns:

0 - if the file should be excluded
1 - if the file should be included
2 - if the folder should be scanned for any included/excluded files
None - if the selection function has nothing to say about the file

The basic idea is to turn glob_str into a regular expression, and just use normal regular expression matching. There is a complication because the selection function should return ‘2’ (scan) for directories which may contain a file which matches the glob_str. So we break up the glob string into parts, and any file which matches an initial sequence of glob parts gets scanned.

Thanks to Donovan Baarda who provided some code which did some things similar to this.

Note: including a folder implicitly includes everything within it.
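
The “return 2 for parent folders” scheme can be illustrated independently of duplicity. The following is a toy standalone sketch of the idea (fnmatch-based, not duplicity’s code):

import fnmatch

def toy_select_fn(glob_parts, include):
    # glob_parts is the glob split on '/'; include is 0 or 1
    def test_fn(path):
        rel = path.strip('/')
        if fnmatch.fnmatch(rel, '/'.join(glob_parts)):
            return include                 # 0 = exclude, 1 = include
        for i in range(1, len(glob_parts)):
            if fnmatch.fnmatch(rel, '/'.join(glob_parts[:i])):
                return 2                   # folder may contain matches: scan
        return None                        # nothing to say
    return test_fn

t = toy_select_fn(['home', '*', 'docs'], include=1)
print(t('/home/alice/docs'))   # 1
print(t('/home/alice'))        # 2
print(t('/etc'))               # None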

duplicity.gpg module

duplicity’s gpg interface; builds upon Frank Tobin’s GnuPGInterface, which is now patched with some code for iterative threaded execution. See duplicity’s README for details.

exception duplicity.gpg.GPGError[source]

Bases: Exception

Indicate some GPG Error

class duplicity.gpg.GPGFile(encrypt, encrypt_path, profile)[source]

Bases: object

File-like object that encrypts or decrypts another file on the fly

__init__(encrypt, encrypt_path, profile)[source]

GPGFile initializer

If recipients is set, use public key encryption and encrypt to the given keys. Otherwise, use symmetric encryption.

encrypt_path is the Path of the gpg encrypted file. Right now only symmetric encryption/decryption is supported.

If passphrase is false, do not set passphrase - GPG program should prompt for it.

close()[source]
get_signature()[source]

Return keyID of signature, or None if none

gpg_failed()[source]
read(length=-1)[source]
seek(offset)[source]
set_signature()[source]

Set self.signature to signature keyID

This only applies to decrypted files. If the file was not signed, set self.signature to None.

tell()[source]
write(buf)[source]
class duplicity.gpg.GPGProfile(passphrase=None, sign_key=None, recipients=None, hidden_recipients=None)[source]

Bases: object

Just hold some GPG settings, avoid passing tons of arguments

__init__(passphrase=None, sign_key=None, recipients=None, hidden_recipients=None)[source]

Set all data with initializer

passphrase is the passphrase. If it is None (not “”), assume it hasn’t been set. sign_key can be blank if no signing is indicated, and recipients should be a list of keys. For all keys, the format should be a hex key ID like ‘AA0E73D2’.

_version_re = re.compile(b'^gpg.*\\(GnuPG(?:/MacGPG2)?\\) (?P<maj>[0-9]+)\\.(?P<min>[0-9]+)\\.(?P<bug>[0-9]+)(-.+)?$')
get_gpg_version(binary)[source]
rc(flags=0)

Compile a regular expression pattern, returning a Pattern object.

duplicity.gpg.GPGWriteFile(block_iter, filename, profile, size=209715200, max_footer_size=16384)[source]

Write GPG compressed file of given size

This function writes a gpg compressed file by reading from the input iter and writing to filename. When it has read an amount close to the size limit, it “tops off” the incoming data with incompressible data, to try to hit the limit exactly.

block_iter should have methods .next(size), which returns the next block of data, which should be at most size bytes long. Also .get_footer() returns a string to write at the end of the input file. The footer should have max length max_footer_size.

Because gpg uses compression, we don’t assume that putting bytes_in bytes into gpg will result in bytes_out = bytes_in coming out. However, we do assume that bytes_out <= bytes_in approximately.

Returns true if succeeded in writing until end of block_iter.
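
The block_iter contract (.next(size) plus .get_footer()) can be met by a small adapter. A toy sketch of the stated interface follows; whether duplicity passes raw bytes or a wrapper object between these calls is not documented here, so treat this as illustrative only:

class BytesBlockIter:
    # Hypothetical block_iter serving chunks of an in-memory buffer.
    def __init__(self, data, footer=b''):
        self.data, self.pos, self.footer = data, 0, footer

    def next(self, size):
        if self.pos >= len(self.data):
            raise StopIteration
        block = self.data[self.pos:self.pos + size]
        self.pos += len(block)
        return block               # at most size bytes, per the contract

    def get_footer(self):
        return self.footer         # written at the end of the input file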

duplicity.gpg.GzipWriteFile(block_iter, filename, size=209715200, gzipped=True)[source]

Write gzipped compressed file of given size

This is like the earlier GPGWriteFile except it writes a gzipped file instead of a gpg’d file. This function is somewhat out of place, because it doesn’t deal with GPG at all, but it is very similar to GPGWriteFile so they might as well be defined together.

The input requirements on block_iter and the output is the same as GPGWriteFile (returns true if wrote until end of block_iter).

duplicity.gpg.PlainWriteFile(block_iter, filename, size=209715200, gzipped=False)[source]

Write plain uncompressed file of given size

This is like the earlier GPGWriteFile except it writes a plain uncompressed file instead of a gpg’d file. This function is somewhat out of place, because it doesn’t deal with GPG at all, but it is very similar to GPGWriteFile so they might as well be defined together.

The input requirements on block_iter and the output is the same as GPGWriteFile (returns true if wrote until end of block_iter).

duplicity.gpg.get_hash(hash, path, hex=1)[source]

Return hash of path

hash should be “MD5” or “SHA1”. The output will be in hexadecimal form if hex is true, and in text (base64) otherwise.
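
A hedged usage sketch (Path construction details vary by duplicity version; the filename here is hypothetical):

from duplicity.gpg import get_hash
from duplicity.path import Path

digest = get_hash("SHA1", Path("/etc/hostname"))   # hex digest since hex=1
print(digest)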

duplicity.gpginterface module

Interface to GNU Privacy Guard (GnuPG)

!!! This was renamed to gpginterface.py.

Please refer to duplicity’s README for the reason. !!!

gpginterface is a Python module to interface with GnuPG, based on GnuPGInterface by Frank J. Tobin. It concentrates on interacting with GnuPG via filehandles, providing access to control GnuPG via versatile and extensible means.

This module is based on GnuPG::Interface, a Perl module by the same author.

Normally, using this module will involve creating a GnuPG object, setting some options in its ‘options’ data member (which is of type Options), creating some pipes to talk with GnuPG, and then calling the run() method, which will connect those pipes to the GnuPG process. run() returns a Process object, which contains the filehandles to talk to GnuPG with.

Example code:

>>> import gpginterface
>>>
>>> plaintext  = b"Three blind mice"
>>> passphrase = "This is the passphrase"
>>>
>>> gnupg = gpginterface.GnuPG()
>>> gnupg.options.armor = 1
>>> gnupg.options.meta_interactive = 0
>>> gnupg.options.extra_args.append('--no-secmem-warning')
>>>
>>> # Normally we might specify something in
>>> # gnupg.options.recipients, like
>>> # gnupg.options.recipients = [ '0xABCD1234', 'bob@foo.bar' ]
>>> # but since we're doing symmetric-only encryption, it's not needed.
>>> # If you are doing standard, public-key encryption, using
>>> # --encrypt, you will need to specify recipients before
>>> # calling gnupg.run()
>>>
>>> # First we'll encrypt the test_text input symmetrically
>>> p1 = gnupg.run(['--symmetric'],
...                create_fhs=['stdin', 'stdout', 'passphrase'])
>>>
>>> ret = p1.handles['passphrase'].write(passphrase)
>>> p1.handles['passphrase'].close()
>>>
>>> ret = p1.handles['stdin'].write(plaintext)
>>> p1.handles['stdin'].close()
>>>
>>> ciphertext = p1.handles['stdout'].read()
>>> p1.handles['stdout'].close()
>>>
>>> # process cleanup
>>> p1.wait()
>>>
>>> # Now we'll decrypt what we just encrypted,
>>> # using the convenience method to get the
>>> # passphrase to GnuPG
>>> gnupg.passphrase = passphrase
>>>
>>> p2 = gnupg.run(['--decrypt'], create_fhs=['stdin', 'stdout'])
>>>
>>> ret = p2.handles['stdin'].write(ciphertext)
>>> p2.handles['stdin'].close()
>>>
>>> decrypted_plaintext = p2.handles['stdout'].read()
>>> p2.handles['stdout'].close()
>>>
>>> # process cleanup
>>> p2.wait()
>>>
>>> # Our decrypted plaintext:
>>> decrypted_plaintext
b'Three blind mice'
>>>
>>> # ...and see it's the same as what we originally encrypted
>>> assert decrypted_plaintext == plaintext, "GnuPG decrypted output does not match original input"
>>>
>>>
>>> ##################################################
>>> # Now let's try using run()'s attach_fhs parameter
>>>
>>> # we're assuming we're running on a unix...
>>> infp = open('/etc/manpaths', 'rb')
>>>
>>> p1 = gnupg.run(['--symmetric'], create_fhs=['stdout'],
...                                 attach_fhs={'stdin': infp})
>>>
>>> # GnuPG will read its stdin from /etc/manpaths
>>> ciphertext = p1.handles['stdout'].read()
>>>
>>> # process cleanup
>>> p1.wait()
>>>
>>> # Now let's run the output through GnuPG
>>> # We'll write the output to a temporary file,
>>> import tempfile
>>> temp = tempfile.TemporaryFile()
>>>
>>> p2 = gnupg.run(['--decrypt'], create_fhs=['stdin'],
...                               attach_fhs={'stdout': temp})
>>>
>>> # give GnuPG our encrypted stuff from the first run
>>> ret = p2.handles['stdin'].write(ciphertext)
>>> p2.handles['stdin'].close()
>>>
>>> # process cleanup
>>> p2.wait()
>>>
>>> # rewind the tempfile and see what GnuPG gave us
>>> ret = temp.seek(0)
>>> decrypted_plaintext = temp.read()
>>>
>>> # compare what GnuPG decrypted with our original input
>>> ret = infp.seek(0)
>>> input_data = infp.read()
>>> assert decrypted_plaintext == input_data, "GnuPG decrypted output does not match original input"

To do things like public-key encryption, simply do something like:

gnupg.passphrase = 'My passphrase'
gnupg.options.recipients = ['bob@foobar.com']
gnupg.run(['--sign', '--encrypt'], create_fhs=..., attach_fhs=...)

Here is an example of subclassing gpginterface.GnuPG, so that it has an encrypt_string() method that returns ciphertext.

>>> import gpginterface
>>>
>>> class MyGnuPG(gpginterface.GnuPG):
...
...     def __init__(self):
...         super().__init__()
...         self.setup_my_options()
...
...     def setup_my_options(self):
...         self.options.armor = 1
...         self.options.meta_interactive = 0
...         self.options.extra_args.append('--no-secmem-warning')
...
...     def encrypt_string(self, string, recipients):
...        self.options.recipients = recipients   # a list!
...
...        proc = self.run(['--encrypt'], create_fhs=['stdin', 'stdout'])
...
...        proc.handles['stdin'].write(string)
...        proc.handles['stdin'].close()
...
...        output = proc.handles['stdout'].read()
...        proc.handles['stdout'].close()
...
...        proc.wait()
...        return output
...
>>> gnupg = MyGnuPG()
>>> ciphertext = gnupg.encrypt_string(b"The secret", ['E477C232'])
>>>
>>> # just a small sanity test here for doctest
>>> import types
>>> assert isinstance(ciphertext, bytes), "What GnuPG gave back is not bytes!"

Here is an example of generating a key:

>>> import gpginterface
>>> gnupg = gpginterface.GnuPG()
>>> gnupg.options.meta_interactive = 0
>>>
>>> # We will be creative and use the logger filehandle to capture
>>> # what GnuPG says this time, instead of stderr; no stdout to listen to,
>>> # but we capture logger to suppress the dry-run command.
>>> # We also have to capture stdout since otherwise doctest complains;
>>> # Normally you can let stdout through when generating a key.
>>>
>>> proc = gnupg.run(['--gen-key'], create_fhs=['stdin', 'stdout',
...                                             'logger'])
>>>
>>> ret = proc.handles['stdin'].write(b'''Key-Type: DSA
... Key-Length: 1024
... # We are only testing syntax this time, so dry-run
... %dry-run
... Subkey-Type: ELG-E
... Subkey-Length: 1024
... Name-Real: Joe Tester
... Name-Comment: with stupid passphrase
... Name-Email: joe@foo.bar
... Expire-Date: 2y
... Passphrase: abc
... %pubring foo.pub
... %secring foo.sec
... ''')
>>>
>>> proc.handles['stdin'].close()
>>>
>>> report = proc.handles['logger'].read()
>>> proc.handles['logger'].close()
>>>
>>> proc.wait()

COPYRIGHT:

Copyright (C) 2001 Frank J. Tobin, ftobin@neverending.org

LICENSE:

This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA or see http://www.gnu.org/copyleft/lesser.html

class duplicity.gpginterface.GnuPG[source]

Bases: object

Class instances represent GnuPG.

Instance attributes of a GnuPG object are:

  • call – string to call GnuPG with. Defaults to “gpg”

  • passphrase – Since it is a common operation to pass in a passphrase to GnuPG, and working with the passphrase filehandle mechanism directly can be mundane, the passphrase attribute works in a special manner: if it is set, and no passphrase file object is sent in to run(), then the GnuPG instance will take care of sending the passphrase to the GnuPG executable instead of having the user send it in manually.

  • options – Object of type gpginterface.Options. Attribute-setting in options determines the command-line options used when calling GnuPG.

__init__()[source]
_as_child(process, gnupg_commands, args)[source]

Stuff run after forking in child

_as_parent(process)[source]

Stuff run after forking in parent

_attach_fork_exec(gnupg_commands, args, create_fhs, attach_fhs)[source]

This is like run(), but without the passphrase-helping (note that run() calls this).

run(gnupg_commands, args=None, create_fhs=None, attach_fhs=None)[source]

Calls GnuPG with the list of string commands gnupg_commands, complete with prefixing dashes. For example, gnupg_commands could be ['--sign', '--encrypt']. Returns a gpginterface.Process object.

args is an optional list of GnuPG command arguments (not options), such as keyID’s to export, filenames to process, etc.

create_fhs is an optional list of GnuPG filehandle names that will be set as keys of the returned Process object’s ‘handles’ attribute. The generated filehandles can be used to communicate with GnuPG via standard input, standard output, the status-fd, passphrase-fd, etc.

Valid GnuPG filehandle names are:
  • stdin

  • stdout

  • stderr

  • status

  • passphrase

  • command

  • logger

The purpose of each filehandle is described in the GnuPG documentation.

attach_fhs is an optional dictionary with GnuPG filehandle names mapping to opened files. GnuPG will read or write to the file accordingly. For example, if ‘my_file’ is an opened file and attach_fhs[‘stdin’] is my_file, then GnuPG will read its standard input from my_file. This is useful if you want GnuPG to read/write to/from an existing file. For instance:

f = open("encrypted.gpg")
gnupg.run(['--decrypt'], attach_fhs={'stdin': f})

Using attach_fhs also helps avoid system buffering issues that can arise when using create_fhs, which can cause the process to deadlock.

If not mentioned in create_fhs or attach_fhs, GnuPG filehandles which are a std* (stdin, stdout, stderr) default to the running process’ own versions of those handles. Otherwise, that type of handle is simply not used when calling GnuPG. For example, if you do not care about getting data from GnuPG’s status filehandle, simply do not specify it.

run() returns a Process() object which has a ‘handles’ attribute: a dictionary mapping from the handle name (such as ‘stdin’ or ‘stdout’) to the respective newly-created FileObject connected to the running GnuPG process. For instance, if the call was

process = gnupg.run(['--decrypt'], stdin=1)

then after run() returns, process.handles['stdin'] is a FileObject connected to GnuPG’s standard input, and can be written to.

class duplicity.gpginterface.Options[source]

Bases: object

Objects of this class encompass options passed to GnuPG. This class is responsible for determining command-line arguments which are based on options. It can be said that a GnuPG object has-a Options object in its options attribute.

Attributes which correlate directly to GnuPG options:

Each option here defaults to false or None, and is described in GnuPG documentation.

Booleans (set these attributes to booleans)

  • armor

  • no_greeting

  • no_verbose

  • quiet

  • batch

  • always_trust

  • rfc1991

  • openpgp

  • force_v3_sigs

  • no_options

  • textmode

Strings (set these attributes to strings)

  • homedir

  • default_key

  • comment

  • compress_algo

  • options

Lists (set these attributes to lists)

  • recipients (*NOTE* plural of ‘recipient’)

  • encrypt_to

Meta options

Meta options are options provided by this module that do not correlate directly to any GnuPG option by name, but are rather bundles of options used to accomplish a specific goal, such as obtaining compatibility with PGP 5. The actual arguments each of these reflects may change with time. Each defaults to false unless otherwise specified.

meta_pgp_5_compatible – If true, arguments are generated to try to be compatible with PGP 5.x.

meta_pgp_2_compatible – If true, arguments are generated to try to be compatible with PGP 2.x.

meta_interactive – If false, arguments are generated to help the calling program use GnuPG in a non-interactive environment, such as CGI scripts. Default is true.

extra_args – Extra option arguments may be passed in via the attribute extra_args, a list.

>>> import gpginterface
>>>
>>> gnupg = gpginterface.GnuPG()
>>> gnupg.options.armor = 1
>>> gnupg.options.recipients = ['Alice', 'Bob']
>>> gnupg.options.extra_args = ['--no-secmem-warning']
>>>
>>> # no need for users to call this normally; just for show here
>>> gnupg.options.get_args()
['--armor', '--recipient', 'Alice', '--recipient', 'Bob', '--no-secmem-warning']
__init__()[source]
get_args()[source]

Generate a list of GnuPG arguments based upon attributes.

get_meta_args()[source]

Get a list of generated meta-arguments

get_standard_args()[source]

Generate a list of standard, non-meta or extra arguments

class duplicity.gpginterface.Pipe(parent, child, direct)[source]

Bases: object

simple struct holding stuff about pipes we use

__init__(parent, child, direct)[source]
class duplicity.gpginterface.Process[source]

Bases: object

Objects of this class encompass properties of a GnuPG process spawned by GnuPG.run().

# gnupg is a GnuPG object
process = gnupg.run(['--decrypt'], stdout=1)
out = process.handles['stdout'].read()
...
os.waitpid(process.pid, 0)

Data Attributes

handles – This is a map of filehandle-names to the file handles, if any, that were requested via run() and hence are connected to the running GnuPG process. Valid names of this map are only those handles that were requested.

pid – The PID of the spawned GnuPG process. Useful to know, since one should call os.waitpid() to clean up the process, especially if multiple calls are made to run().

__init__()[source]
wait()[source]

Wait on threaded_waitpid to exit and examine results. Will raise an IOError if the process exits non-zero.

duplicity.gpginterface.threaded_waitpid(process)[source]

When started as a thread with the Process object, thread will execute an immediate waitpid() against the process pid and will collect the process termination info. This will allow us to reap child processes as soon as possible, thus freeing resources quickly.

duplicity.lazy module

Define some lazy data structures and functions acting on them

class duplicity.lazy.ITRBranch[source]

Bases: object

Helper class for IterTreeReducer above

There are five stub functions below: start_process, end_process, branch_process, fast_process, and can_fast_process. A class that subclasses this one will probably fill in these functions to do more.

base_index = None
branch_process(branch)[source]

Process a branch right after it is finished (stub)

call_end_proc()[source]

Runs the end_process on self, checking for errors

can_fast_process(*args)[source]

True if object can be processed without new branch (stub)

caught_exception = None
end_process()[source]

Do any final processing before leaving branch (stub)

fast_process(*args)[source]

Process args without new child branch (stub)

finished = None
index = None
log_prev_error(index)[source]

Call function if no pending exception

on_error(exc, *args)[source]

This is run on any exception in start/end-process

start_process(*args)[source]

Do some initial processing (stub)

start_successful = None
class duplicity.lazy.Iter[source]

Bases: object

Hold static methods for the manipulation of lazy iterators

static And(iter)[source]

True if all elements in iterator are true. Short circuiting

static Or(iter)[source]

True if any element in iterator is true. Short circuiting

static cat(*iters)[source]

Lazily concatenate iterators

static cat2(iter_of_iters)[source]

Lazily concatenate iterators, iterated by big iterator

static empty(iter)[source]

True if iterator has length 0

static equal(iter1, iter2, verbose=None, operator=<function Iter.<lambda>>)[source]

True if iterator 1 has same elements as iterator 2

Use equality operator, or == if it is unspecified.

static filter(predicate, iterator)[source]

Like filter in a lazy functional programming language

static foldl(f, default, iter)[source]

The fundamental list iteration operator.

static foldr(f, default, iter)[source]

foldr, the “fundamental list recursion operator”.

static foreach(function, iterator)[source]

Run function on each element in iterator

static len(iter)[source]

Return length of iterator

static map(function, iterator)[source]

Like map in a lazy functional programming language

static multiplex(iter, num_of_forks, final_func=None, closing_func=None)[source]

Split a single iterator into a number of streams

The return value will be a list with length num_of_forks, each of which will be an iterator like iter. final_func is the function that will be called on each element in iter just as it is being removed from the buffer. closing_func is called when all the streams are finished.
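
For example, forking one iterator into two independent streams (a sketch; elements are buffered until both streams have consumed them):

from duplicity.lazy import Iter

a, b = Iter.multiplex(iter(range(5)), 2)
print(list(a))   # [0, 1, 2, 3, 4]
print(list(b))   # [0, 1, 2, 3, 4]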

class duplicity.lazy.IterMultiplex2(iter)[source]

Bases: object

Multiplex an iterator into 2 parts

This is a special optimized case of the Iter.multiplex function, used when there is no closing_func or final_func, and we only want to split it into 2. By profiling, this is a time sensitive class.

__init__(iter)[source]
yielda()[source]

Return first iterator

yieldb()[source]

Return second iterator

class duplicity.lazy.IterTreeReducer(branch_class, branch_args)[source]

Bases: object

Tree style reducer object for iterator - stolen from rdiff-backup

The indices of a RORPIter form a tree-type structure. This class can be used on each element of an iter in sequence and the result will be as if the corresponding tree was reduced. This tries to bridge the gap between the tree nature of directories, and the iterator nature of the connection between hosts and the temporal order in which the files are processed.

This will usually be used by subclassing ITRBranch below and then calling the initializer below with the new class.

Finish()[source]

Call at end of sequence to tie everything up

__call__(*args)[source]

Process args, where args[0] is current position in iterator

Returns true if args successfully processed, false if index is not in the current tree and thus the final result is available.

Also note below we set self.index after doing the necessary start processing, in case there is a crash in the middle.

__init__(branch_class, branch_args)[source]

ITR initializer

add_branch()[source]

Return branch of type self.branch_class, add to branch list

finish_branches(index)[source]

Run Finish() on all branches index has passed

When we pass out of a branch, delete it and process it with the parent. The innermost branches will be the last in the list. Return None if we are out of the entire tree, and 1 otherwise.

process_w_branch(index, branch, args)[source]

Run start_process on latest branch

duplicity.librsync module

Provides a high-level interface to some librsync functions

This is a python wrapper around the lower-level _librsync module, which is written in C. The goal was to use C as little as possible…

class duplicity.librsync.DeltaFile(signature, new_file)[source]

Bases: LikeFile

File-like object which incrementally generates a librsync delta

__init__(signature, new_file)[source]

DeltaFile initializer - call with signature and new file

Signature can either be a string or a file with read() and close() methods. New_file also only needs to have read() and close() methods. It will be closed when self is closed.

class duplicity.librsync.LikeFile(infile, need_seek=None)[source]

Bases: object

File-like object used by SigFile, DeltaFile, and PatchFile

__init__(infile, need_seek=None)[source]

LikeFile initializer - zero buffers, set eofs off

_add_to_inbuf()[source]

Make sure len(self.inbuf) >= blocksize

_add_to_outbuf_once()[source]

Add one cycle’s worth of output to self.outbuf

check_file(file, need_seek=None)[source]

Raise type error if file doesn’t have necessary attributes

close()[source]

Close infile

maker = None
mode = 'rb'
read(length=-1)[source]

Build up self.outbuf, return first length bytes

class duplicity.librsync.PatchedFile(basis_file, delta_file)[source]

Bases: LikeFile

File-like object which applies a librsync delta incrementally

__init__(basis_file, delta_file)[source]

PatchedFile initializer - call with basis file and delta file

Here basis_file must be a true Python file, because we may need to seek() around in it a lot, and this is done in C. delta_file only needs read() and close() methods.

class duplicity.librsync.SigFile(infile, blocksize=duplicity._librsync.RS_DEFAULT_BLOCK_LEN)[source]

Bases: LikeFile

File-like object which incrementally generates a librsync signature

__init__(infile, blocksize=duplicity._librsync.RS_DEFAULT_BLOCK_LEN)[source]

SigFile initializer - takes basis file

basis file only needs to have read() and close() methods. It will be closed when we come to the end of the signature.

class duplicity.librsync.SigGenerator(blocksize=duplicity._librsync.RS_DEFAULT_BLOCK_LEN)[source]

Bases: object

Calculate signature.

Input and output are the same as SigFile, but the interface is like the md5 module, not a file-like object

__init__(blocksize=duplicity._librsync.RS_DEFAULT_BLOCK_LEN)[source]

Return new signature instance

getsig()[source]

Return signature over given data

process_buffer()[source]

Run self.buffer through sig_maker, add to self.sig_string

update(buf)[source]

Add buf to data that signature will be calculated over

exception duplicity.librsync.librsyncError[source]

Bases: Exception

Signifies error in internal librsync processing (bad signature, etc.)

Underlying _librsync.librsyncError exceptions are regenerated using this class because the C-created exceptions are by default unpicklable. There is probably a way to fix this in _librsync, but this scheme was easier.

duplicity.log module

Log various messages depending on verbosity level

duplicity.log.Debug(s)[source]

Shortcut used for debug message (verbosity 9).

class duplicity.log.DetailFormatter[source]

Bases: Formatter

Formatter that creates messages in a syntax somewhat like syslog.

__init__()[source]

Initialize the formatter with specified format strings.

Initialize the formatter either with the specified format string, or a default as described above. Allow for specialized date formatting with the optional datefmt argument. If datefmt is omitted, you get an ISO8601-like (or RFC 3339-like) format.

Use a style parameter of ‘%’, ‘{’ or ‘$’ to specify that you want to use one of %-formatting, str.format() ({}) formatting or string.Template formatting in your format string.

Changed in version 3.2: Added the style parameter.

format(record)[source]

Format the specified record as text.

The record’s attribute dictionary is used as the operand to a string formatting operation which yields the returned string. Before formatting the dictionary, a couple of preparatory steps are carried out. The message attribute of the record is computed using LogRecord.getMessage(). If the formatting string uses the time (as determined by a call to usesTime()), formatTime() is called to format the event time. If there is exception information, it is formatted using formatException() and appended to the message.

duplicity.log.DupToLoggerLevel(verb)[source]

Convert duplicity level to the logging module’s system, where higher is more severe

class duplicity.log.ErrFilter(name='')[source]

Bases: Filter

Filter that only allows messages more important than warnings

filter(record)[source]

Determine if the specified record is to be logged.

Returns True if the record should be logged, or False otherwise. If deemed appropriate, the record may be modified in-place.

duplicity.log.Error(s, code=1, extra=None)[source]

Write error message

class duplicity.log.ErrorCode[source]

Bases: object

Enumeration class to hold error code values. These values should never change, as frontends rely upon them. Don’t use 0 or negative numbers. This code is returned by duplicity to indicate which error occurred via both exit code and log.

absolute_files_from = 72
backend_code_error = 55
backend_command_error = 54
backend_error = 50
backend_no_space = 53
backend_not_found = 52
backend_permission_denied = 51
backup_dir_doesnt_exist = 13
bad_archive_dir = 9
bad_encrypt_key = 81
bad_hidden_encrypt_key = 82
bad_request = 48
bad_sign_key = 80
bad_url = 8
boto_calling_format = 26
boto_lib_too_old = 25
boto_old_style = 24
cant_open_filelist = 7
command_line = 2
connection_failed = 38
deprecated_option = 10
dpbx_nologin = 47
empty_files_from = 73
enryption_mismatch = 45
exception = 30
file_prefix_error = 14
ftp_ncftp_missing = 27
ftp_ncftp_too_old = 28
ftps_lftp_missing = 43
generic = 1
get_freespace_failed = 34
get_ulimit_failed = 36
gio_not_available = 40
globbing_error = 15
gpg_failed = 31
hostname_mismatch = 3
inc_without_sigs = 17
maxopen_too_low = 37
mismatched_hash = 21
mismatched_manifests = 5
no_manifests = 4
no_restore_files = 20
no_sigs = 18
not_enough_freespace = 35
not_implemented = 33
pythonoptimize_set = 46
redundant_filter = 70
redundant_inclusion = 16
restart_file_not_found = 39
restore_path_exists = 11
restore_path_not_found = 19
s3_bucket_not_style = 32
s3_kms_no_id = 49
source_path_mismatch = 42
trailing_filter = 71
unreadable_manifests = 6
unsigned_volume = 22
user_error = 23
verify_dir_doesnt_exist = 12
volume_wrong_size = 44
duplicity.log.FatalError(s, code=1, extra=None)[source]

Write fatal error message and exit

duplicity.log.Info(s, code=1, extra=None)[source]

Shortcut used for info messages (verbosity 5).

class duplicity.log.InfoCode[source]

Bases: object

Enumeration class to hold info code values. These values should never change, as frontends rely upon them. Don’t use 0 or negative numbers.

asynchronous_upload_begin = 12
asynchronous_upload_done = 14
collection_status = 3
diff_file_changed = 5
diff_file_deleted = 6
diff_file_new = 4
file_list = 10
generic = 1
patch_file_patching = 8
patch_file_writing = 7
progress = 2
skipping_socket = 15
synchronous_upload_begin = 11
synchronous_upload_done = 13
upload_progress = 16
duplicity.log.LevelName(level)[source]
duplicity.log.Log(s, verb_level, code=1, extra=None, force_print=False, transfer_progress=False)[source]

Write s to stderr if verbosity level low enough

duplicity.log.LoggerToDupLevel(verb)[source]

Convert logging module level to duplicity’s system, where lower is more severe

class duplicity.log.MachineFilter(name='')[source]

Bases: Filter

Filter that only allows levels that are consumable by other processes.

filter(record)[source]

Determine if the specified record is to be logged.

Returns True if the record should be logged, or False otherwise. If deemed appropriate, the record may be modified in-place.

class duplicity.log.MachineFormatter[source]

Bases: Formatter

Formatter that creates messages in a syntax easily consumable by other processes.

__init__()[source]

Initialize the formatter with specified format strings.

Initialize the formatter either with the specified format string, or a default as described above. Allow for specialized date formatting with the optional datefmt argument. If datefmt is omitted, you get an ISO8601-like (or RFC 3339-like) format.

Use a style parameter of ‘%’, ‘{’ or ‘$’ to specify that you want to use one of %-formatting, str.format() ({}) formatting or string.Template formatting in your format string.

Changed in version 3.2: Added the style parameter.

format(record)[source]

Format the specified record as text.

The record’s attribute dictionary is used as the operand to a string formatting operation which yields the returned string. Before formatting the dictionary, a couple of preparatory steps are carried out. The message attribute of the record is computed using LogRecord.getMessage(). If the formatting string uses the time (as determined by a call to usesTime()), formatTime() is called to format the event time. If there is exception information, it is formatted using formatException() and appended to the message.

duplicity.log.Notice(s)[source]

Shortcut used for notice messages (verbosity 3, the default).

class duplicity.log.OutFilter(name='')[source]

Bases: Filter

Filter that only allows warning or less important messages

filter(record)[source]

Determine if the specified record is to be logged.

Returns True if the record should be logged, or False otherwise. If deemed appropriate, the record may be modified in-place.

class duplicity.log.PrettyProgressFormatter[source]

Bases: Formatter

Formatter that overwrites previous progress lines on ANSI terminals

__init__()[source]

Initialize the formatter with specified format strings.

Initialize the formatter either with the specified format string, or a default as described above. Allow for specialized date formatting with the optional datefmt argument. If datefmt is omitted, you get an ISO8601-like (or RFC 3339-like) format.

Use a style parameter of ‘%’, ‘{’ or ‘$’ to specify that you want to use one of %-formatting, str.format() ({}) formatting or string.Template formatting in your format string.

Changed in version 3.2: Added the style parameter.

format(record)[source]

Format the specified record as text.

The record’s attribute dictionary is used as the operand to a string formatting operation which yields the returned string. Before formatting the dictionary, a couple of preparatory steps are carried out. The message attribute of the record is computed using LogRecord.getMessage(). If the formatting string uses the time (as determined by a call to usesTime()), formatTime() is called to format the event time. If there is exception information, it is formatted using formatException() and appended to the message.

last_record_was_progress = False
duplicity.log.PrintCollectionChangesInSet(col_stats, set_index, force_print=False)[source]

Prints changes in the specified set to the log

duplicity.log.PrintCollectionFileChangedStatus(col_stats, filepath, force_print=False)[source]

Prints a collection status to the log

duplicity.log.PrintCollectionStatus(col_stats, force_print=False)[source]

Prints a collection status to the log

duplicity.log.Progress(s, current, total=None)[source]

Shortcut used for progress messages (verbosity 5).

duplicity.log.TransferProgress(progress, eta, changed_bytes, elapsed, speed, stalled)[source]

Shortcut used for upload progress messages (verbosity 5).

duplicity.log.Warn(s, code=1, extra=None)[source]

Shortcut used for warning messages (verbosity 2)

class duplicity.log.WarningCode[source]

Bases: object

Enumeration class to hold warning code values. These values should never change, as frontends rely upon them. Don’t use 0 or negative numbers.

cannot_iterate = 8
cannot_process = 12
cannot_read = 10
cannot_stat = 9
ftp_ncftp_v320 = 7
generic = 1
incomplete_backup = 5
no_sig_for_time = 11
orphaned_backup = 6
orphaned_sig = 2
process_skipped = 13
unmatched_sig = 4
unnecessary_sig = 3
duplicity.log._ElapsedSecs2Str(secs)[source]
duplicity.log._RemainingSecs2Str(secs)[source]
duplicity.log.add_fd(fd)[source]

Add stream to which to write machine-readable logging

duplicity.log.add_file(filename)[source]

Add file to which to write machine-readable logging

duplicity.log.getverbosity()[source]

Get the verbosity level

duplicity.log.setup()[source]

Initialize logging

duplicity.log.setverbosity(verb)[source]

Set the verbosity level

duplicity.log.shutdown()[source]

Cleanup and flush loggers

duplicity.manifest module

Create and edit manifest for session contents

class duplicity.manifest.Manifest(fh=None)[source]

Bases: object

List of volumes and information about each one

__init__(fh=None)[source]

Create blank Manifest

@param fh: fileobj for manifest
@type fh: DupPath

@rtype: Manifest
@return: manifest

add_volume_info(vi)[source]

Add volume info vi to manifest and write to manifest

@param vi: volume info to add
@type vi: VolumeInfo

@return: void

check_dirinfo()[source]

Return None if dirinfo is the same, otherwise error message

Does not raise an error message if hostname or local_dirname are not available.

@rtype: string
@return: None or error message

del_volume_info(vol_num)[source]

Remove volume vol_num from the manifest

@param vol_num: volume number to delete
@type vol_num: int

@return: void

from_string(s)[source]

Initialize self from string s, return self

get_containing_volumes(index_prefix)[source]

Return list of volume numbers that may contain index_prefix

get_files_changed()[source]
set_dirinfo()[source]

Set information about directory from config, and write to manifest file.

@rtype: Manifest
@return: manifest

set_files_changed_info(files_changed)[source]
to_string()[source]

Return string version of self (just concatenate vi strings)

@rtype: string
@return: self in string form

write_to_path(path)[source]

Write string version of manifest to given path

exception duplicity.manifest.ManifestError[source]

Bases: Exception

Exception raised when problem with manifest

duplicity.manifest.Quote(s)[source]

Return quoted version of s safe to put in a manifest or volume info

duplicity.manifest.Unquote(quoted_string)[source]

Return original string from quoted_string produced by above
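
The two are designed as inverses; a sketch, assuming bytes input as used elsewhere in duplicity:

from duplicity.manifest import Quote, Unquote

raw = b'file with spaces\nand a newline'
assert Unquote(Quote(raw)) == raw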

class duplicity.manifest.VolumeInfo[source]

Bases: object

Information about a single volume

__init__()[source]

VolumeInfo initializer

contains(index_prefix, recursive=1)[source]

Return true if volume might contain index

If recursive is true, then return true if any index starting with index_prefix could be contained. Otherwise, just check if index_prefix itself is between starting and ending indices.

from_string(s)[source]

Initialize self from string s as created by to_string

get_best_hash()[source]

Return pair (hash_type, hash_data)

SHA1 is the best hash, and MD5 is the second best hash. None is returned if no hash is available.

set_hash(hash_name, data)[source]

Set the value of hash hash_name (e.g. “MD5”) to data

set_info(vol_number, start_index, start_block, end_index, end_block)[source]

Set essential VolumeInfo information, return self

Call with starting and ending paths stored in the volume. If a multivol diff gets split between volumes, count it as being part of both volumes.

to_string()[source]

Return nicely formatted string reporting all information

exception duplicity.manifest.VolumeInfoError[source]

Bases: Exception

Raised when there is a problem initializing a VolumeInfo from string

duplicity.manifest.maybe_chr(ch)[source]
duplicity.patchdir module
class duplicity.patchdir.IndexedTuple(index, sequence)[source]

Bases: object

Like a tuple, but has .index (used previously by collate_iters)

__init__(index, sequence)[source]
class duplicity.patchdir.Multivol_Filelike(tf, tar_iter, tarinfo_list, index)[source]

Bases: object

Emulate a file like object from multivols

Maintains a buffer about the size of a volume. When it is read() to the end, pull in more volumes as desired.

__init__(tf, tar_iter, tarinfo_list, index)[source]

Initializer. tf is TarFile obj, tarinfo is first tarinfo

addtobuffer()[source]

Add next chunk to buffer

close()[source]

If not at end, read remaining data

read(length=-1)[source]

Read length bytes from file

duplicity.patchdir.Patch(base_path, difftar_fileobj)[source]

Patch given base_path and file object containing delta

exception duplicity.patchdir.PatchDirException[source]

Bases: Exception

duplicity.patchdir.Patch_from_iter(base_path, fileobj_iter, restrict_index=())[source]

Patch given base_path and iterator of delta file objects

class duplicity.patchdir.PathPatcher(base_path)[source]

Bases: ITRBranch

Used by DirPatch, process the given basis and diff

__init__(base_path)[source]

Set base_path, Path of root of tree

can_fast_process(index, basis_path, diff_ropath)[source]

No need to recurse if diff_ropath isn’t a directory

end_process()[source]

Copy directory permissions when leaving tree

fast_process(index, basis_path, diff_ropath)[source]

For use when neither is a directory

start_process(index, basis_path, diff_ropath)[source]

Start processing when diff_ropath is a directory

class duplicity.patchdir.ROPath_IterWriter(base_path)[source]

Bases: ITRBranch

Used in Write_ROPaths above

We need to use an ITR because we have to update the permissions/times of directories after we write the files in them.

__init__(base_path)[source]

Set base_path, Path of root of tree

can_fast_process(index, ropath)[source]

Can fast process (no recursion) if ropath isn’t a directory

end_process()[source]

Update information of a directory when leaving it

fast_process(index, ropath)[source]

Write non-directory ropath to destination

start_process(index, ropath)[source]

Write ropath. Only handles the directory case

class duplicity.patchdir.TarFile_FromFileobjs(fileobj_iter)[source]

Bases: object

Like a tarfile.TarFile iterator, but read from multiple fileobjs

__init__(fileobj_iter)[source]

Make new tarinfo iterator

fileobj_iter should be an iterator of file objects opened for reading. They will be closed at end of reading.

__next__()[source]
extractfile(tarinfo)[source]

Return data associated with given tarinfo

set_tarfile()[source]

Set tarfile from next file object, or raise StopIteration

duplicity.patchdir.Write_ROPaths(base_path, rop_iter)[source]

Write out ropaths in rop_iter starting at base_path

Returns 1 if something was actually written, 0 otherwise.

duplicity.patchdir.collate_iters(iter_list)[source]

Collate iterators by index

Input is a list of n iterators each of which must iterate elements with an index attribute. The elements must come out in increasing order, and the index should be a tuple itself.

The output is an iterator which yields tuples where all elements in the tuple have the same index, and the tuple has n elements in it. If any iterator lacks an element with that index, the tuple will have None in that spot.
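
A toy standalone reimplementation of these semantics (for illustration; not duplicity’s code):

class El:
    def __init__(self, index, val):
        self.index, self.val = index, val

def toy_collate(iter_list):
    # Yield n-tuples aligned by .index, with None where an iter lacks it.
    heads = [next(i, None) for i in iter_list]
    while any(h is not None for h in heads):
        low = min(h.index for h in heads if h is not None)
        row = []
        for k, h in enumerate(heads):
            if h is not None and h.index == low:
                row.append(h)
                heads[k] = next(iter_list[k], None)
            else:
                row.append(None)
        yield tuple(row)

pairs = toy_collate([iter([El((1,), 'a'), El((2,), 'b')]),
                     iter([El((2,), 'B'), El((3,), 'C')])])
for row in pairs:
    print([e.val if e else None for e in row])
# ['a', None] / ['b', 'B'] / [None, 'C']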

duplicity.patchdir.difftar2path_iter(diff_tarfile)[source]

Turn file-like difftarobj into iterator of ROPaths

duplicity.patchdir.empty_iter()[source]
duplicity.patchdir.filter_path_iter(path_iter, index)[source]

Rewrite path elements of path_iter so they start with index

Discard any that don’t start with index, and remove the index prefix from the rest.

duplicity.patchdir.get_index_from_tarinfo(tarinfo)[source]

Return (index, difftype, multivol) pair from tarinfo object

duplicity.patchdir.integrate_patch_iters(iter_list)[source]

Combine a list of iterators of ropath patches

The iter_list should be sorted in patch order, and the elements in each iter_list need to be ordered by index. The output will be an iterator of the final ROPaths in index order.

duplicity.patchdir.normalize_ps(patch_sequence)[source]

Given a sequence of ROPath deltas, remove blank and unnecessary ones

The sequence is assumed to be in patch order (later patches apply to earlier ones). A patch is unnecessary if a later one doesn’t require it (for instance, any patches before a “delete” are unnecessary).

duplicity.patchdir.patch_diff_tarfile(base_path, diff_tarfile, restrict_index=())[source]

Patch given Path object using delta tarfile (as in tarfile.TarFile)

If restrict_index is set, ignore any deltas in diff_tarfile that don’t start with restrict_index.

duplicity.patchdir.patch_seq2ropath(patch_seq)[source]

Apply the patches in patch_seq, return single ropath

duplicity.patchdir.tarfiles2rop_iter(tarfile_list, restrict_index=())[source]

Integrate tarfiles of diffs into single ROPath iter

Then filter out all the diffs in that index which don’t start with the restrict_index.

duplicity.path module

Wrapper class around a file like “/usr/bin/env”

This class makes certain file operations more convenient and associates stat information with filenames

class duplicity.path.DupPath(base, index=(), parseresults=None)[source]

Bases: Path

Represent duplicity data files

Based on the file name, files that are compressed or encrypted will have different open() methods.

__init__(base, index=(), parseresults=None)[source]

DupPath initializer

The actual filename (no directory) must be the single element of the index, unless parseresults is given.

filtered_open(mode='rb', gpg_profile=None)[source]

Return fileobj with appropriate encryption/compression

If encryption is specified but no gpg_profile, use config.default_profile.

class duplicity.path.Path(base, index=())[source]

Bases: ROPath

Path class - wrapper around ordinary local files

Besides caching stat() results, this class organizes various file code.

__init__(base, index=())[source]

Path initializer

append(ext)[source]

Return new Path with ext added to index

chmod(mode)[source]

Change permissions of the path

compare_recursive(other, verbose=None)[source]

Compare self to other Path, descending down directories

contains(child)[source]

Return true if path is a directory and contains child

delete()[source]

Remove this file

deltree()[source]

Remove self by recursively deleting files under it

get_canonical()[source]

Return string of canonical version of path

Remove “.”, and trailing slashes where possible. Note that it’s harder to remove “..”, as “foo/bar/..” is not necessarily “foo”, so we can’t use path.normpath()

get_filename()[source]

Return filename of last component

get_parent_dir()[source]

Return directory that self is in

get_temp_in_same_dir()[source]

Return a temporary, nonexistent path in the same directory as self

isemptydir()[source]

Return true if path is a directory and is empty

listdir()[source]

Return list generated by os.listdir

makedev(type, major, minor)[source]

Make a device file with specified type, major/minor nums

mkdir()[source]

Make directory(s) at specified path

move(new_path)[source]

Like rename but destination may be on different file system

new_index(index)[source]

Return new Path with index index

open(mode='rb')[source]

Return fileobj associated with self

Usually this is just the file data on disk, but can be replaced with arbitrary data using the setfileobj method.

patch_with_attribs(diff_ropath)[source]

Patch self with diff and then copy attributes over

quote(s=None)[source]

Return quoted version of s (defaults to self.name)

The output is meant to be interpreted with shells, so can be used with os.system.
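
For illustration, a hedged sketch (the path is hypothetical, and byte strings are assumed as in recent duplicity releases):

    import os
    from duplicity.path import Path

    p = Path(b'/backup/My "Files"')    # a name containing shell metacharacters
    os.system(b"ls -ld " + p.quote())  # quote() escapes \, ", $ and ` for the shell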

regex_chars_to_quote = re.compile('[\\\\\\"\\$`]')
rename(new_path)[source]

Rename file at current path to new_path.

rename_index(index)[source]
setdata()[source]

Refresh stat cache

touch()[source]

Open the file, write 0 bytes, close

unquote(s)[source]

Return unquoted version of string s, as quoted by above quote()

writefileobj(fin)[source]

Copy file object fin to self. Close both when done.
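
A short sketch of typical Path usage (the file name is hypothetical; byte paths are assumed):

    from duplicity.path import Path

    p = Path(b"/etc/hostname")    # stat information is gathered and cached
    if p.exists() and p.isreg():
        data = p.get_data()       # contents of the associated fileobj
    p.setdata()                   # refresh the stat cache if the file changed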

class duplicity.path.PathDeleter[source]

Bases: ITRBranch

Delete a directory. Called by Path.deltree

can_fast_process(index, path)[source]

True if object can be processed without new branch (stub)

end_process()[source]

Do any final processing before leaving branch (stub)

fast_process(index, path)[source]

Process args without new child branch (stub)

start_process(index, path)[source]

Do some initial processing (stub)

exception duplicity.path.PathException[source]

Bases: Exception

class duplicity.path.ROPath(index, stat=None)[source]

Bases: object

Read only Path

Objects of this class don’t represent real files, so they don’t have a name. They are required to be indexed, though.

__init__(index, stat=None)[source]

ROPath initializer

blank()[source]

Blank out self - set type and stat to None

compare_data(other)[source]

Compare data from two regular files, return true if same

compare_verbose(other, include_data=0)[source]

Compare ROPaths like __eq__, but log reason if different

This is placed in a separate function from __eq__ because __eq__ should be very time sensitive, and logging statements would slow it down. Used when verifying.

Only run if include_data is true.

copy(other)[source]

Copy self to other. Also copies data. Other must be Path

copy_attribs(other)[source]

Only copy attributes from self to other

exists()[source]

True if corresponding file exists

get_data()[source]

Return contents of the associated fileobj as a string

get_relative_path()[source]

Return relative path, created from index

get_ropath()[source]

Return ropath copy of self

get_tarinfo()[source]

Generate a tarfile.TarInfo object based on self

Doesn’t set size based on stat, because we may want to replace the data with another stream. Size should be set separately by the calling function.

getdevloc()[source]

Return device number path resides on

getmtime()[source]

Return mod time of path in seconds

getperms()[source]

Return permissions mode, owner and group

getsize()[source]

Return length in bytes from stat object

init_from_tarinfo(tarinfo)[source]

Set data from tarinfo object (part of tarfile module)

isdev()[source]

True if self is a device file

isdir()[source]

True if self is a directory

isfifo()[source]

True if self is a fifo

isreg()[source]

True if self corresponds to a regular file

issock()[source]

True if self is a socket

issym()[source]

True if self is a symlink

open(mode)[source]

Return fileobj associated with self

perms_equal(other)[source]

True if self and other have same permissions and ownership

set_from_stat()[source]

Set the value of self.type, self.mode from self.stat

setfileobj(fileobj)[source]

Set file object returned by open()

class duplicity.path.StatResult[source]

Bases: object

Used to emulate the output of os.stat() and related

st_mode = 0
duplicity.progress module

Functions to compute the progress of compressing & uploading files. The heuristics try to infer the ratio between the amount of data collected by the deltas and the total size of the changing files. They also infer the compression and encryption ratio of the raw deltas before sending them to the backend. With the inferred ratios, the heuristics estimate the percentage of completion and the time left to transfer all the (yet unknown) amount of data to send. This is a forecast based on gathered evidence.

class duplicity.progress.LogProgressThread[source]

Bases: Thread

Background thread that reports progress to the log, every --progress-rate seconds

__init__()[source]

This constructor should always be called with keyword arguments. Arguments are:

group should be None; reserved for future extension when a ThreadGroup class is implemented.

target is the callable object to be invoked by the run() method. Defaults to None, meaning nothing is called.

name is the thread name. By default, a unique name is constructed of the form “Thread-N” where N is a small decimal number.

args is the argument tuple for the target invocation. Defaults to ().

kwargs is a dictionary of keyword arguments for the target invocation. Defaults to {}.

If a subclass overrides the constructor, it must make sure to invoke the base class constructor (Thread.__init__()) before doing anything else to the thread.

run()[source]

Method representing the thread’s activity.

You may override this method in a subclass. The standard run() method invokes the callable object passed to the object’s constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively.

class duplicity.progress.ProgressTracker[source]

Bases: object

__init__()[source]
annotate_written_bytes(bytecount)[source]

Annotate the number of bytes that have been added/changed since the last time this function was called. The bytecount param gives the number of bytes written since the start of the current volume.

has_collected_evidence()[source]

Returns true if the progress computation is on and duplicity has already performed the first dry-run pass that collects evidence

log_upload_progress()[source]

Approximate and evolving method of computing the upload progress

set_evidence(stats, is_full)[source]

Stores the collected statistics from a first-pass dry-run, so this information can be used later to estimate progress

set_start_volume(volume)[source]
snapshot_progress(volume)[source]

Snapshots the current progress status for each volume into the disk cache. If the backup is interrupted, the next restart will deserialize the data and try to resume progress from the snapshot.

total_elapsed_seconds()[source]

Elapsed seconds since the first call to log_upload_progress method

class duplicity.progress.Snapshot(iterable=None, maxlen=10)[source]

Bases: deque

A convenience class for storing snapshots in a space/time-efficient manner. Stores up to 10 consecutive progress snapshots, one for each volume.

__init__(iterable=None, maxlen=10)[source]
clear()[source]

Remove all elements from the deque.

get_snapshot(volume)[source]
marshall()[source]

Serializes object to cache

pop_snapshot()[source]
push_snapshot(volume, snapshot_data)[source]
static unmarshall()[source]

De-serializes cached data if present

duplicity.progress.report_transfer(bytecount, totalbytes)[source]

Method to call tracker.annotate_written_bytes from outside the class, and to offer the “function(long, long)” signature which is handy to pass as a callback
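
A hedged sketch of passing report_transfer as a progress callback; the uploader here is hypothetical:

    from duplicity import progress

    def upload_chunks(chunks, send):
        # send() is a hypothetical per-chunk transmit function.
        total = sum(len(c) for c in chunks)
        sent = 0
        for chunk in chunks:
            send(chunk)
            sent += len(chunk)
            progress.report_transfer(sent, total)  # (bytes so far, total bytes)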

duplicity.robust module
duplicity.robust.check_common_error(error_handler, function, args=())[source]

Apply function to args, if error, run error_handler on exception

This only catches certain exceptions which seem innocent enough.

duplicity.robust.listpath(path)[source]

Like path.listdir() but return [] if error, and sort results

duplicity.selection module
class duplicity.selection.Select(path)[source]

Bases: object

Iterate appropriate Paths in given directory

This class acts as an iterator on account of its next() method. Basically, it just goes through all the files in a directory in order (depth-first) and subjects each file to a bunch of tests (selection functions) in order. The first test that includes or excludes the file means that the file gets included (iterated) or excluded. The default is include, so with no tests we would just iterate all the files in the directory in order.

The one complication to this is that sometimes we don’t know whether or not to include a directory until we examine its contents. For instance, suppose we want to include all the **.py files. If /home/ben/foo.py exists, we should also include /home and /home/ben, but if these directories contain no **.py files, they shouldn’t be included. For this reason, a test may not include or exclude a directory, but merely “scan” it. If later a file in the directory gets included, so does the directory.

As mentioned above, each test takes the form of a selection function. The selection function takes a path, and returns:

None - means the test has nothing to say about the related file
0 - the file is excluded by the test
1 - the file is included
2 - the test says the file (which must be a directory) should be scanned

Also, a selection function f has a variable f.exclude which should be true if f could potentially exclude some file. This is used to signal an error if the last function only includes, which would be redundant and presumably isn’t what the user intends.
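
For illustration, a minimal hand-written selection function obeying this protocol (hypothetical; real functions are normally built by the *_get_sf methods below, and attached with add_selection_func()):

    def exclude_logs_sf(path):
        # None = no opinion, 0 = exclude, 1 = include, 2 = scan (dirs only)
        if not path.isdir() and path.get_filename().endswith(b".log"):
            return 0
        return None

    exclude_logs_sf.exclude = True                    # this test can exclude files
    exclude_logs_sf.name = "exclude *.log (example)"  # name attribute assumed, for logging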

Iterate(path)[source]

Return iterator yielding paths in path

This function looks a bit more complicated than it needs to be because it avoids extra recursion (and makes no extra function calls for non-directory files) while still doing the “directory scanning” bit.

ParseArgs(argtuples, filelists)[source]

Create selection functions based on list of tuples

The tuples are created when the initial commandline arguments are read. They have the form (option string, additional argument) except for the filelist tuples, which should be (option-string, (additional argument, filelist_fp)).
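
A hedged sketch of the tuple format (the option strings and patterns are illustrative, and whether arguments are str or bytes may vary between releases):

    from duplicity.path import Path
    from duplicity.selection import Select

    root = Path(b"/home")                    # hypothetical backup root
    sel = Select(root)
    sel.ParseArgs(
        [("--exclude", "/home/*/.cache"),    # (option string, argument) tuples
         ("--include", "/home"),
         ("--exclude", "**")],
        [])                                  # no filelist file objects here
    sel.set_iter()                           # prepare the generator
    p = next(sel)                            # Paths are yielded via __next__()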

Select(path)[source]

Run through the selection functions and return dominant val 0/1/2

__init__(path)[source]

Initializer, called with Path of root directory

__next__()[source]
add_selection_func(sel_func, add_to_start=None)[source]

Add another selection function at the end or beginning

devfiles_get_sf()[source]

Return a selection function to exclude all dev files

exclude_older_get_sf(date)[source]

Return selection function based on files older than modification date

filelist_general_get_sfs(filelist_fp, inc_default, list_name, mode='globbing', ignore_case=False)[source]

Return list of selection functions by reading fileobj

filelist_fp should be an open file object
inc_default is true if this is an include list
list_name is just the name of the list, used for logging
mode indicates whether to glob, regex, or not

filelist_sanitise_line(line, include_default)[source]

Sanitises lines of both normal and globbing filelists, returning (line, include), with line=None if the line is blank or a comment

The aim is to parse filelists in a consistent way, prior to the interpretation of globbing statements. The function removes whitespace, comment lines and processes modifiers (leading +/-) and quotes.

general_get_sf(pattern_str, include, mode='globbing', ignore_case=False)[source]

Return selection function given by a pattern string

The selection patterns are interpreted in accordance with the mode argument, “globbing”, “literal”, or “regex”.

The ‘ignorecase:’ prefix is a legacy feature which historically lived on the globbing code path and was only ever documented as working for globs.

glob_get_sf(glob_str, include, ignore_case=False)[source]

Return selection function based on glob_str

literal_get_sf(lit_str, include, ignore_case=False)[source]

Return a selection function that matches a literal string while still including the contents of any folders which are matched

other_filesystems_get_sf(include)[source]

Return selection function matching files on other filesystems

parse_catch_error(exc)[source]

Deal with selection error exc

parse_files_from(filelist_fp, list_name)[source]

Loads an explicit list of files to back up from a filelist, building a dictionary of directories and their contents which can be used later to emulate a filesystem walk over the listed files only.

Each specified path is unwound to identify its parent folder(s), as these are implicitly included.

Paths read are not to be stripped, checked for comments, etc. Every character on each line is significant and treated as part of the path.

parse_last_excludes()[source]

Exit with error if last selection function isn’t an exclude

present_get_sf(filename, include)[source]

Return selection function given by existence of a file in a directory

regexp_get_sf(regexp_string, include, ignore_case=False)[source]

Return selection function given by regexp_string

select_fn_from_literal(lit_str, include, ignore_case=False)[source]

Return a function test_fn(path) which tests whether a path matches a literal string. See also select_fn_from_glob() in globmatch.py

This function is separated from literal_get_sf() so that it can be used to test the prefix without creating a loop.

TODO: this doesn’t need to be part of the Select class type, but not sure where else to put it?

set_iter()[source]

Initialize generator, prepare to iterate.

duplicity.statistics module

Generate and process backup statistics

class duplicity.statistics.StatsDeltaProcess[source]

Bases: StatsObj

Keep track of statistics during DirDelta process

__init__()[source]

StatsDeltaProcess initializer - zero file attributes

add_changed_file(path)[source]

Add stats of file that has changed since last backup

add_deleted_file(path)[source]

Add stats of file no longer in source directory

add_delta_entries_file(path, action_type)[source]
add_new_file(path)[source]

Add stats of new file path to statistics

add_unchanged_file(path)[source]

Add stats of file that hasn’t changed since last backup

close()[source]

End collection of data, set EndTime

get_delta_entries_file()[source]
exception duplicity.statistics.StatsException[source]

Bases: Exception

class duplicity.statistics.StatsObj[source]

Bases: object

Contains various statistics, provide string conversion functions

__init__()[source]

Set attributes to None

byte_abbrev_list = ((1099511627776, 'TB'), (1073741824, 'GB'), (1048576, 'MB'), (1024, 'KB'))
get_byte_summary_string(byte_count)[source]

Turn byte count into human readable string like “7.23GB”
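
For instance (the exact output formatting is assumed from the docstring’s example):

    s = StatsObj()
    s.get_byte_summary_string(7762000000)   # -> roughly "7.23GB"
    s.get_byte_summary_string(800)          # counts under 1 KB presumably stay in bytes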

get_filestats_string()[source]

Return portion of statistics string about files and bytes

get_miscstats_string()[source]

Return portion of extended stat string about misc attributes

get_stat(attribute)[source]

Get a statistic

get_stats_line(index, use_repr=1)[source]

Return one line abbreviated version of full stats string

get_stats_logstring(title)[source]

Like get_stats_string, but add header and footer

get_stats_string()[source]

Return extended string printing out statistics

get_statsobj_copy()[source]

Return new StatsObj object with same stats as self

get_timestats_string()[source]

Return portion of statistics string dealing with time

increment_stat(attr)[source]

Add 1 to value of attribute

read_stats_from_path(path)[source]

Set statistics from path, return self for convenience

set_stat(attr, value)[source]

Set attribute to given value

set_stats_from_line(line)[source]

Set statistics from given line

set_stats_from_string(s)[source]

Initialize attributes from string, return self for convenience

set_to_average(statobj_list)[source]

Set self’s attributes to average of those in statobj_list

space_regex = re.compile(' ')
stat_attrs = ('Filename', 'StartTime', 'EndTime', 'ElapsedTime', 'Errors', 'TotalDestinationSizeChange', 'SourceFiles', 'SourceFileSize', 'NewFiles', 'NewFileSize', 'DeletedFiles', 'ChangedFiles', 'ChangedFileSize', 'ChangedDeltaSize', 'DeltaEntries', 'RawDeltaSize')
stat_file_attrs = ('SourceFiles', 'SourceFileSize', 'NewFiles', 'NewFileSize', 'DeletedFiles', 'ChangedFiles', 'ChangedFileSize', 'ChangedDeltaSize', 'DeltaEntries', 'RawDeltaSize')
stat_file_pairs = (('SourceFiles', False), ('SourceFileSize', True), ('NewFiles', False), ('NewFileSize', True), ('DeletedFiles', False), ('ChangedFiles', False), ('ChangedFileSize', True), ('ChangedDeltaSize', True), ('DeltaEntries', False), ('RawDeltaSize', True))
stat_misc_attrs = ('Errors', 'TotalDestinationSizeChange')
stat_time_attrs = ('StartTime', 'EndTime', 'ElapsedTime')
stats_equal(s)[source]

Return true if s has same statistics as self

write_stats_to_path(path)[source]

Write statistics string to given path

duplicity.tarfile module

Like system tarfile but with caching.

duplicity.tempdir module

Provides temporary file handling centered around a single top-level securely created temporary directory.

The public interface of this module is thread-safe.

class duplicity.tempdir.TemporaryDirectory(temproot=None)[source]

Bases: object

A temporary directory.

An instance of this class is backed by a directory in the file system created securely by the use of tempfile.mkdtemp(). Said instance can be used to obtain unique filenames inside of this directory for cases where mktemp()-like semantics are desired, or (recommended) an (fd, filename) pair for mkstemp()-like semantics.

See further below for the security implications of using it.

Each instance will keep a list of all files ever created by it, to facilitate deletion of such files and rmdir() of the directory itself. It does this in order to be able to clean out the directory without resorting to a recursive delete (a la rm -rf), which would be risky. Calling code can optionally (recommended) notify an instance of the fact that a tempfile was deleted, and thus no longer needs to be tracked.

This class serves two primary purposes:

Firstly, it provides a convenient single top-level directory in which all the clutter ends up, rather than cluttering up the root of the system temp directory itself with many files.

Secondly, it provides a way to get mktemp() style semantics for temporary file creation, with most of the risks gone. Specifically, since the directory itself is created securely, files in this directory can be (mostly) safely created non-atomically without the usual mktemp() security implications. However, in the presence of tmpwatch, tmpreaper, or similar mechanisms that will cause files in the system tempdir to expire, a security risk is still present because the removal of the TemporaryDirectory managed directory removes all protection it offers.

For this reason, use of mkstemp() is greatly preferred above use of mktemp().

In addition, since cleanup is in the form of deletion based on a list of filenames, completely independently of whether someone else already deleted the file, there exists a race here as well. The impact should, however, be limited to the removal of an attacker’s file.
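
A minimal usage sketch of the recommended mkstemp()-style flow (the written data is hypothetical):

    import os
    from duplicity import tempdir

    td = tempdir.TemporaryDirectory()  # or tempdir.default() for the shared one
    fd, fname = td.mkstemp()           # securely created, tracked for cleanup
    with os.fdopen(fd, "wb") as f:
        f.write(b"scratch data")
    os.unlink(fname)
    td.forget(fname)                   # tell the tracker the file is gone
    td.cleanup()                       # remove remaining files and the directory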

__init__(temproot=None)[source]

Create a new TemporaryDirectory backed by a unique and securely created file system directory.

temproot - The temp root directory, or None to use the system default (recommended).

cleanup()[source]

Cleanup any files created in the temporary directory (that have not been forgotten), and clean up the temporary directory itself.

On failure they are logged, but this method will not raise an exception.

dir()[source]

Returns the absolute pathname of the temp folder.

forget(fname)[source]

Forget about the given filename previously obtained through mktemp() or mkstemp(). This should be called after the file has been deleted, to stop a future cleanup() from trying to delete it.

Forgetting is only needed for scaling purposes; that is, to prevent n tempfile creations from implying that n filenames are kept in memory. Typically this would never matter in duplicity, but for niceness’ sake callers are recommended to use this method whenever possible.

mkstemp()[source]

Returns a file descriptor and a filename, as per os.mkstemp(), but located in the temporary directory and subject to tracking and automatic cleanup.

mkstemp_file()[source]

Convenience wrapper around mkstemp(), with the file descriptor converted into a file object.

mktemp()[source]

Return a unique filename suitable for use for a temporary file. The file is not created.

Subsequent calls to this method are guaranteed to never return the same filename again. As a result, it is safe to use under concurrent conditions.

NOTE: mkstemp() is greatly preferred.

duplicity.tempdir.default()[source]

Obtain the global default instance of TemporaryDirectory, creating it first if necessary. Failures are propagated to the caller. Most callers are expected to use this function rather than instantiating TemporaryDirectory directly, unless they explicitly desire to have their “own” directory for some reason.

This function is thread-safe.

duplicity.util module

Miscellaneous utilities.

class duplicity.util.BlackHoleList(iterable=(), /)[source]

Bases: list

append(x)[source]

Append object to the end of the list.

class duplicity.util.FakeTarFile[source]

Bases: object

close()[source]
debug = 0
duplicity.util.copyfileobj(infp, outfp, byte_count=-1)[source]

Copy byte_count bytes from infp to outfp, or all if byte_count < 0

Returns the number of bytes actually written (may be less than byte_count if EOF is reached). Does not close either fileobj.
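
For instance, copying a fixed-size prefix (the file names are hypothetical):

    from duplicity import util

    with open("src.bin", "rb") as fin, open("dst.bin", "wb") as fout:
        written = util.copyfileobj(fin, fout, 1024 * 1024)  # at most 1 MiB
    # copyfileobj() does not close either file object; the with-blocks do.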

duplicity.util.csv_args_to_dict(arg)[source]

Given the string arg in single-line CSV format, split it into (key, val) pairs and produce a dictionary from those key:val pairs.
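
A hedged illustration; the key=value field syntax is an assumption, not confirmed by the docstring:

    from duplicity import util

    # Assumed input form: comma-separated key=value fields on one line.
    util.csv_args_to_dict("region=us-east-1,timeout=30")
    # -> {"region": "us-east-1", "timeout": "30"} (values presumably stay strings)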

duplicity.util.escape(string)[source]

Convert a (bytes) filename to a format suitable for logging (quoted utf8)

duplicity.util.exception_traceback(limit=50)[source]
@return A string representation in typical Python format of the

currently active/raised exception.

duplicity.util.get_tarinfo_name(ti)[source]
duplicity.util.ignore_missing(fn, filename)[source]

Execute fn on filename. Ignore ENOENT errors, otherwise raise exception.

@param fn: callable
@param filename: string

duplicity.util.make_tarfile(mode, fp)[source]
duplicity.util.maybe_ignore_errors(fn)[source]

Execute fn. If the global configuration setting ignore_errors is set to True, catch errors, log them, and continue (returning None).

@param fn: A callable.
@return Whatever fn returns when called, or None if it failed and ignore_errors is true.

duplicity.util.merge_dicts(*dict_args)[source]

Given any number of dictionaries, shallow-copy and merge them into a new dict; precedence goes to key-value pairs in later dictionaries.
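
For example (a direct consequence of the documented precedence):

    from duplicity import util

    util.merge_dicts({"a": 1, "b": 1}, {"b": 2, "c": 3})
    # -> {"a": 1, "b": 2, "c": 3}; later dicts win on key collisions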

duplicity.util.release_lockfile()[source]
duplicity.util.start_debugger()[source]
duplicity.util.uexc(e)[source]

Returns the exception message in Unicode

duplicity.util.uindex(index)[source]

Convert an index (a tuple of path parts) to unicode for printing

duplicity.util.which(program)[source]

Return absolute path for program name. Returns None if program not found.

Module contents

testing package

Subpackages

testing.functional package
Submodules
testing.functional.test_badupload module
testing.functional.test_cleanup module
testing.functional.test_final module
testing.functional.test_log module
testing.functional.test_rdiffdir module
testing.functional.test_restart module
testing.functional.test_selection module
testing.functional.test_verify module
Module contents
testing.unit package
Submodules
testing.unit.test_backend module
testing.unit.test_backend_instance module
testing.unit.test_cli_main module
testing.unit.test_collections module
testing.unit.test_diffdir module
testing.unit.test_dup_temp module
testing.unit.test_dup_time module
testing.unit.test_file_naming module
testing.unit.test_globmatch module
testing.unit.test_gpg module
testing.unit.test_gpginterface module
testing.unit.test_lazy module
testing.unit.test_manifest module
testing.unit.test_patchdir module
testing.unit.test_path module
testing.unit.test_selection module
testing.unit.test_statistics module
testing.unit.test_tarfile module
testing.unit.test_tempdir module
testing.unit.test_util module
Module contents

Submodules

testing.conftest module
testing.test_code module

Module contents

Indices and tables