rclone(1)

About rclone
What can rclone do for you?
What features does rclone have?
What providers does rclone support?
Download (https://rclone.org/downloads/)
Install (https://rclone.org/install/)
Donate (https://rclone.org/donate/)

Rclone is a command-line program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.

Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone's familiar syntax includes shell pipeline support, and --dry-run protection. It is used at the command line, in scripts or via its API.

Users call rclone "The Swiss army knife of cloud storage", and "Technology indistinguishable from magic".

Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth, intermittent connections, or subject to quota can be restarted from the last good file transferred. You can check (https://rclone.org/commands/rclone_check/) the integrity of your files. Where possible, rclone employs server-side transfers to minimise local bandwidth use and transfers from one provider to another without using local disk.

Virtual backends wrap local and cloud file systems to apply encryption (https://rclone.org/crypt/), compression (https://rclone.org/compress/), chunking (https://rclone.org/chunker/), hashing (https://rclone.org/hasher/) and joining (https://rclone.org/union/).

Rclone mounts (https://rclone.org/commands/rclone_mount/) any local, cloud or virtual filesystem as a disk on Windows, macOS, Linux and FreeBSD, and also serves these over SFTP (https://rclone.org/commands/rclone_serve_sftp/), HTTP (https://rclone.org/commands/rclone_serve_http/), WebDAV (https://rclone.org/commands/rclone_serve_webdav/), FTP (https://rclone.org/commands/rclone_serve_ftp/) and DLNA (https://rclone.org/commands/rclone_serve_dlna/).

Rclone is mature, open-source software originally inspired by rsync and written in Go (https://golang.org). The friendly support community is familiar with varied use cases. Official Ubuntu, Debian, Fedora, Brew and Chocolatey repos include rclone. For the latest version, downloading from rclone.org (https://rclone.org/downloads/) is recommended.

Rclone is widely used on Linux, Windows and Mac. Third-party developers create innovative backup, restore, GUI and business process solutions using the rclone command line or API.

Rclone does the heavy lifting of communicating with cloud storage.

Rclone helps you:

Backup (and encrypt) files to cloud storage
Restore (and decrypt) files from cloud storage
Mirror cloud data to other cloud services or locally
Migrate data to the cloud, or between cloud storage vendors
Mount multiple, encrypted, cached or diverse cloud storage as a disk
Analyse and account for data held on cloud storage using lsf (https://rclone.org/commands/rclone_lsf/), lsjson (https://rclone.org/commands/rclone_lsjson/), size (https://rclone.org/commands/rclone_size/), ncdu (https://rclone.org/commands/rclone_ncdu/)
Union (https://rclone.org/union/) file systems together to present multiple local and/or cloud file systems as one

Transfers
MD5, SHA1 hashes are checked at all times for file integrity
Timestamps are preserved on files
Operations can be restarted at any time
Can be to and from the network, e.g. between two different cloud providers
Can use multi-threaded downloads to local disk
Copy (https://rclone.org/commands/rclone_copy/) new or changed files to cloud storage
Sync (https://rclone.org/commands/rclone_sync/) (one way) to make a directory identical
Move (https://rclone.org/commands/rclone_move/) files to cloud storage deleting the local after verification
Check (https://rclone.org/commands/rclone_check/) hashes and for missing/extra files
Mount (https://rclone.org/commands/rclone_mount/) your cloud storage as a network disk
Serve (https://rclone.org/commands/rclone_serve/) local or remote files over HTTP (https://rclone.org/commands/rclone_serve_http/)/WebDav (https://rclone.org/commands/rclone_serve_webdav/)/FTP (https://rclone.org/commands/rclone_serve_ftp/)/SFTP (https://rclone.org/commands/rclone_serve_sftp/)/DLNA (https://rclone.org/commands/rclone_serve_dlna/)
Experimental Web based GUI (https://rclone.org/gui/)

(There are many others, built on standard protocols such as WebDAV or S3, that work out of the box.)

1Fichier
Akamai Netstorage
Alibaba Cloud (Aliyun) Object Storage System (OSS)
Amazon Drive
Amazon S3
Backblaze B2
Box
Ceph
China Mobile Ecloud Elastic Object Storage (EOS)
Arvan Cloud Object Storage (AOS)
Citrix ShareFile
C14
Cloudflare R2
DigitalOcean Spaces
Digi Storage
Dreamhost
Dropbox
Enterprise File Fabric
FTP
Google Cloud Storage
Google Drive
Google Photos
HDFS
Hetzner Storage Box
HiDrive
HTTP
Hubic
Internet Archive
Jottacloud
IBM COS S3
IDrive e2
Koofr
Mail.ru Cloud
Memset Memstore
Mega
Memory
Microsoft Azure Blob Storage
Microsoft OneDrive
Minio
Nextcloud
OVH
OpenDrive
OpenStack Swift
Oracle Cloud Storage
ownCloud
pCloud
premiumize.me
put.io
QingStor
Rackspace Cloud Files
rsync.net
Scaleway
Seafile
Seagate Lyve Cloud
SeaweedFS
SFTP
Sia
StackPath
Storj
SugarSync
Tencent Cloud Object Storage (COS)
Uptobox
Wasabi
WebDAV
Yandex Disk
Zoho WorkDrive
The local filesystem

These backends adapt or modify other storage providers:

Alias: Rename existing remotes
Cache: Cache remotes (DEPRECATED)
Chunker: Split large files
Combine: Combine multiple remotes into a directory tree
Compress: Compress files
Crypt: Encrypt files
Hasher: Hash files
Union: Join multiple remotes to work together
Home page (https://rclone.org/)
GitHub project page for source and bug tracker (https://github.com/rclone/rclone)
Rclone Forum (https://forum.rclone.org)
Downloads (https://rclone.org/downloads/)

Rclone is a Go program and comes as a single binary file.

Download (https://rclone.org/downloads/) the relevant binary.
Extract the rclone executable, rclone.exe on Windows, from the archive.
Run rclone config to set up. See rclone config docs (https://rclone.org/docs/) for more details.
Optionally configure automatic execution.

See below for some expanded Linux / macOS instructions.

See the usage (https://rclone.org/docs/) docs for how to use rclone, or run rclone -h.

Already installed rclone can be easily updated to the latest version using the rclone selfupdate (https://rclone.org/commands/rclone_selfupdate/) command.
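
For example, to upgrade a previously installed release in place, run:

rclone selfupdate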

To install rclone on Linux/macOS/BSD systems, run:

sudo -v ; curl https://rclone.org/install.sh | sudo bash

For beta installation, run:

sudo -v ; curl https://rclone.org/install.sh | sudo bash -s beta

Note that this script checks the version of rclone installed first and won't re-download if not needed.

Fetch and unpack

curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip rclone-current-linux-amd64.zip
cd rclone-*-linux-amd64

Copy binary file

sudo cp rclone /usr/bin/
sudo chown root:root /usr/bin/rclone
sudo chmod 755 /usr/bin/rclone

Install manpage

sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb

Run rclone config to set up. See rclone config docs (https://rclone.org/docs/) for more details.

rclone config

brew install rclone

NOTE: This version of rclone will not support mount any more (see #5373 (https://github.com/rclone/rclone/issues/5373)). If mounting is wanted on macOS, either install a precompiled binary or enable the relevant option when installing from source.

To avoid problems with macOS Gatekeeper enforcing that the binary is signed and notarized, it is enough to download with curl.

Download the latest version of rclone.

cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip

Unzip the download and cd to the extracted folder.

unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64

Move rclone to your $PATH. You will be prompted for your password.

sudo mkdir -p /usr/local/bin
sudo mv rclone /usr/local/bin/

(the mkdir command is safe to run, even if the directory already exists).

Remove the leftover files.

cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip

Run rclone config to set up. See rclone config docs (https://rclone.org/docs/) for more details.

rclone config

When downloading a binary with a web browser, the browser will set the macOS gatekeeper quarantine attribute. Starting from Catalina, when attempting to run rclone, a pop-up will appear saying:

"rclone" cannot be opened because the developer cannot be verified.
macOS cannot verify that this app is free from malware.

The simplest fix is to run

xattr -d com.apple.quarantine rclone

The rclone project maintains a Docker image for rclone (https://hub.docker.com/r/rclone/rclone). These images are autobuilt by Docker Hub from the rclone source, based on a minimal Alpine Linux image.

The :latest tag will always point to the latest stable release. You can use the :beta tag to get the latest build from master. You can also use version tags, e.g. :1.49.1, :1.49 or :1.

$ docker pull rclone/rclone:latest
latest: Pulling from rclone/rclone
Digest: sha256:0e0ced72671989bb837fea8e88578b3fc48371aa45d209663683e24cfdaa0e11
...
$ docker run --rm rclone/rclone:latest version
rclone v1.49.1
- os/arch: linux/amd64
- go version: go1.12.9

There are a few command line options to consider when starting an rclone Docker container from the rclone image.

You need to mount the host rclone config dir at /config/rclone into the Docker container. Due to the fact that rclone updates tokens inside its config file, and that the update process involves a file rename, you need to mount the whole host rclone config dir, not just the single host rclone config file.
You need to mount a host data dir at /data into the Docker container.
By default, the rclone binary inside a Docker container runs with UID=0 (root). As a result, all files created in a run will have UID=0. If your config and data files reside on the host with a non-root UID:GID, you need to pass these on the container start command line.
If you want to access the RC interface (either via the API or the Web UI), it is required to set the --rc-addr to :5572 in order to connect to it from outside the container. An explanation about why this is necessary is present here (https://web.archive.org/web/20200808071950/https://pythonspeed.com/articles/docker-connection-refused/).
NOTE: Users running this container with the docker network set to host should probably set it to listen to localhost only, with 127.0.0.1:5572 as the value for --rc-addr.
It is possible to use rclone mount inside a userspace Docker container, and expose the resulting fuse mount to the host. The exact docker run options to do that might vary slightly between hosts. See, e.g. the discussion in this thread (https://github.com/moby/moby/issues/9448).

You also need to mount the host /etc/passwd and /etc/group for fuse to work inside the container.

Here are some commands tested on an Ubuntu 18.04.3 host:

# config on host at ~/.config/rclone/rclone.conf
# data on host at ~/data
# make sure the config is ok by listing the remotes
docker run --rm \
    --volume ~/.config/rclone:/config/rclone \
    --volume ~/data:/data:shared \
    --user $(id -u):$(id -g) \
    rclone/rclone \
    listremotes

# perform mount inside Docker container, expose result to host
mkdir -p ~/data/mount
docker run --rm \
    --volume ~/.config/rclone:/config/rclone \
    --volume ~/data:/data:shared \
    --user $(id -u):$(id -g) \
    --volume /etc/passwd:/etc/passwd:ro --volume /etc/group:/etc/group:ro \
    --device /dev/fuse --cap-add SYS_ADMIN --security-opt apparmor:unconfined \
    rclone/rclone \
    mount dropbox:Photos /data/mount &
ls ~/data/mount
kill %1

Make sure you have git and Go (https://golang.org/) installed. Go version 1.16 or newer is required, latest release is recommended. You can get it from your package manager, or download it from golang.org/dl (https://golang.org/dl/). Then you can run the following:

git clone https://github.com/rclone/rclone.git
cd rclone
go build

This will check out the rclone source in subfolder rclone, which you can later modify and send pull requests with. Then it will build the rclone executable in the same folder. As an initial check you can now run ./rclone version (.\rclone version on Windows).

Note that on macOS and Windows the mount (https://rclone.org/commands/rclone_mount/) command will not be available unless you specify additional build tag cmount.

go build -tags cmount

This assumes you have a GCC compatible C compiler (GCC or Clang) in your PATH, as it uses cgo (https://pkg.go.dev/cmd/cgo). But on Windows, the cgofuse (https://github.com/winfsp/cgofuse) library that the cmount implementation is based on, also supports building without cgo (https://github.com/golang/go/wiki/WindowsDLLs), i.e. by setting environment variable CGO_ENABLED to value 0 (static linking). This is how the official Windows release of rclone is being built, starting with version 1.59. It is still possible to build with cgo on Windows as well, by using the MinGW port of GCC, e.g. by installing it in a MSYS2 (https://www.msys2.org) distribution (make sure you install it in the classic mingw64 subsystem, the ucrt64 version is not compatible).

Additionally, on Windows, you must install the third party utility WinFsp (http://www.secfs.net/winfsp/), with the "Developer" feature selected. If building with cgo, you must also set environment variable CPATH pointing to the fuse include directory within the WinFsp installation (normally C:\Program Files (x86)\WinFsp\inc\fuse).
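
A minimal sketch of such a cgo build in a cmd.exe session, assuming WinFsp is installed in its default location (adjust the path otherwise):

set CPATH=C:\Program Files (x86)\WinFsp\inc\fuse
go build -tags cmount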

You may also add arguments -ldflags -s (with or without -tags cmount), to omit symbol table and debug information, making the executable file smaller, and -trimpath to remove references to local file system paths. This is how the official rclone releases are built.

go build -trimpath -ldflags -s -tags cmount

Instead of executing the go build command directly, you can run it via the Makefile, which also sets version information and copies the resulting rclone executable into your GOPATH bin folder ($(go env GOPATH)/bin, which corresponds to ~/go/bin/rclone by default).

make

To include mount command on macOS and Windows with Makefile build:

make GOTAGS=cmount

As an alternative you can download the source, build and install rclone in one operation, as a regular Go package. The source will be stored in the Go module cache, and the resulting executable will be in your GOPATH bin folder ($(go env GOPATH)/bin, which corresponds to ~/go/bin/rclone by default).

With Go version 1.17 or newer:

go install github.com/rclone/rclone@latest

With Go versions older than 1.17 (do not use the -u flag, it causes Go to try to update the dependencies that rclone uses and sometimes these don't work with the current version):

go get github.com/rclone/rclone

This can be done with Stefan Weichinger's ansible role (https://github.com/stefangweichinger/ansible-rclone).

Instructions

1. git clone https://github.com/stefangweichinger/ansible-rclone.git into your local roles directory
2. add the role to the hosts you want rclone installed to:

    - hosts: rclone-hosts
      roles:
          - rclone

As mentioned above (https://rclone.org/install/#quickstart), rclone is a single executable (rclone, or rclone.exe on Windows) that you can download as a zip archive and extract into a location of your choosing. When executing different commands, it may create files in different locations, such as a configuration file and various temporary files. By default the locations for these depend on your operating system, e.g. the configuration file in your user profile directory and temporary files in the standard temporary directory, but you can customize all of them, e.g. to make a completely self-contained, portable installation.

Run the config paths (https://rclone.org/commands/rclone_config_paths/) command to see the locations that rclone will use.

To override them set the corresponding options (as command-line arguments, or as environment variables (https://rclone.org/docs/#environment-variables)):

--config (https://rclone.org/docs/#config-config-file)
--cache-dir (https://rclone.org/docs/#cache-dir-dir)
--temp-dir (https://rclone.org/docs/#temp-dir-dir)
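
For example, a hypothetical fully portable invocation might keep all of these next to the executable (paths are illustrative):

rclone --config ./rclone.conf --cache-dir ./cache --temp-dir ./tmp listremotes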

After installing and configuring rclone, as described above, you are ready to use rclone as an interactive command line utility. If your goal is to perform periodic operations, such as a regular sync (https://rclone.org/commands/rclone_sync/), you will probably want to configure your rclone command in your operating system's scheduler. If you need to expose service-like features, such as remote control (https://rclone.org/rc/), GUI (https://rclone.org/gui/), serve (https://rclone.org/commands/rclone_serve/) or mount (https://rclone.org/commands/rclone_mount/), you will often want an rclone command always running in the background, and configuring it to run in a service infrastructure may be a better option. Below are some alternatives on how to achieve this on different operating systems.

NOTE: Before setting up autorun it is highly recommended that you have tested your command manually from a Command Prompt first.

The most relevant alternatives for autostart on Windows are:

Run at user log on using the Startup folder
Run at user log on, at system startup or on a schedule using Task Scheduler
Run at system startup using a Windows service

Rclone is a console application, so if not starting from an existing Command Prompt, e.g. when starting rclone.exe from a shortcut, it will open a Command Prompt window. When configuring rclone to run from Task Scheduler or as a Windows service you are able to set it to run hidden in the background. From rclone version 1.54 you can also make it run hidden from anywhere by adding the option --no-console (it may still flash briefly when the program starts). Since rclone normally writes information and any error messages to the console, you must redirect this to a file to be able to see it. Rclone has a built-in option --log-file for that.

Example command to run a sync in background:

c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclone\logs\sync_files.txt

As mentioned in the mount (https://rclone.org/commands/rclone_mount/) documentation, mounted drives created as Administrator are not visible to other accounts, not even the account that was elevated as Administrator. By running the mount command as the built-in SYSTEM user account, it will create drives accessible for everyone on the system. Both scheduled task and Windows service can be used to achieve this.

NOTE: Remember that when rclone runs as the SYSTEM user, the user profile that it sees will not be yours. This means that if you normally run rclone with the configuration file in the default location, to be able to use the same configuration when running as the system user you must explicitly tell rclone where to find it with the --config (https://rclone.org/docs/#config-config-file) option, or else it will look in the system user's profile path (C:\Windows\System32\config\systemprofile). To test your command manually from a Command Prompt, you can run it with the PsExec (https://docs.microsoft.com/en-us/sysinternals/downloads/psexec) utility from Microsoft's Sysinternals suite, which takes option -s to execute commands as the SYSTEM user.

To quickly execute an rclone command you can simply create a standard Windows Explorer shortcut for the complete rclone command you want to run. If you store this shortcut in the special "Startup" start-menu folder, Windows will automatically run it at login. To open this folder in Windows Explorer, enter path %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup, or C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp if you want the command to start for every user that logs in.

This is the easiest approach to autostarting rclone, but it offers no functionality to set it to run as a different user, or to set conditions or actions on certain events. Setting up a scheduled task as described below will often give you better results.

Task Scheduler is an administrative tool built into Windows, and it can be used to configure rclone to be started automatically in a highly configurable way, e.g. periodically on a schedule, on user log on, or at system startup. It can be configured to run as the current user, or, for a mount command that needs to be available to all users, it can run as the SYSTEM user. For technical information, see https://docs.microsoft.com/windows/win32/taskschd/task-scheduler-start-page.

For running rclone at system startup, you can create a Windows service that executes your rclone command, as an alternative to scheduled task configured to run at startup.

For mount commands, rclone has a built-in Windows service integration via the third-party WinFsp library it uses. Registering as a regular Windows service is easy, as you just have to execute the built-in PowerShell command New-Service (requires administrative privileges).

Example of a PowerShell command that creates a Windows service for mounting some remote:/files as drive letter X:, for all users (service will be running as the local system account):

New-Service -Name Rclone -BinaryPathName 'c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt'

The WinFsp service infrastructure (https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture) supports incorporating services for file system implementations, such as rclone, into its own launcher service, as kind of "child services". This has the additional advantage that it also implements a network provider that integrates into Windows standard methods for managing network drives. This is currently not officially supported by Rclone, but with WinFsp version 2019.3 B2 / v1.5B2 or later it should be possible through path rewriting as described here (https://github.com/rclone/rclone/issues/3340).

To run any rclone command as a Windows service, the excellent third-party utility NSSM (http://nssm.cc), the "Non-Sucking Service Manager", can be used. It includes some advanced features such as adjusting process priority, defining process environment variables, redirecting anything written to stdout to a file, and customized responses to different exit codes, with a GUI to configure everything from (although it can also be used from the command line).

There are also several other alternatives. To mention one more, WinSW (https://github.com/winsw/winsw), "Windows Service Wrapper", is worth checking out. It requires .NET Framework, but that is preinstalled on newer versions of Windows, and it also provides alternative standalone distributions which include the necessary runtime (.NET 5). WinSW is a command-line only utility, where you have to manually create an XML file with the service configuration. This may be a drawback for some, but it can also be an advantage as it is easy to back up and re-use the configuration settings, without having to go through manual steps in a GUI. One thing to note is that by default it does not restart the service on error; one has to explicitly enable this in the configuration file (via the "onfailure" parameter).

To always run rclone in background, relevant for mount commands etc, you can use systemd to set up rclone as a system or user service. Running as a system service ensures that it is run at startup even if the user it is running as has no active session. Running rclone as a user service ensures that it only starts after the configured user has logged into the system.
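
Below is a minimal sketch of a systemd user unit for a mount; the remote name, mount point and binary paths are illustrative, and the mount point must already exist:

[Unit]
Description=rclone mount of remote:

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount remote: %h/mnt/remote --vfs-cache-mode writes
ExecStop=/bin/fusermount -u %h/mnt/remote
Restart=on-failure

[Install]
WantedBy=default.target

Saved as ~/.config/systemd/user/rclone-mount.service, it could then be enabled with systemctl --user enable --now rclone-mount.service.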

To run a periodic command, such as a copy/sync, you can set up a cron job.
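
For example, a hypothetical crontab entry running a nightly sync at 03:00 (paths and remote name are illustrative):

0 3 * * * /usr/bin/rclone sync /home/user/files remote:files --log-file /home/user/rclone-sync.log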

Rclone is a command line program to manage files on cloud storage. After download (https://rclone.org/downloads/) and install, continue here to learn how to use it: initial configuration, what the basic syntax looks like, the various subcommands, the various options, and more.

First, you'll need to configure rclone. As the object storage systems have quite complicated authentication, these details are kept in a config file. (See the --config entry for how to find the config file and choose its location.)

The easiest way to make the config is to run rclone with the config option:

rclone config

See the following for detailed instructions for

1Fichier (https://rclone.org/fichier/)
Akamai Netstorage (https://rclone.org/netstorage/)
Alias (https://rclone.org/alias/)
Amazon Drive (https://rclone.org/amazonclouddrive/)
Amazon S3 (https://rclone.org/s3/)
Backblaze B2 (https://rclone.org/b2/)
Box (https://rclone.org/box/)
Chunker (https://rclone.org/chunker/) - transparently splits large files for other remotes
Citrix ShareFile (https://rclone.org/sharefile/)
Compress (https://rclone.org/compress/)
Combine (https://rclone.org/combine/)
Crypt (https://rclone.org/crypt/) - to encrypt other remotes
DigitalOcean Spaces (https://rclone.org/s3/#digitalocean-spaces)
Digi Storage (https://rclone.org/koofr/#digi-storage)
Dropbox (https://rclone.org/dropbox/)
Enterprise File Fabric (https://rclone.org/filefabric/)
FTP (https://rclone.org/ftp/)
Google Cloud Storage (https://rclone.org/googlecloudstorage/)
Google Drive (https://rclone.org/drive/)
Google Photos (https://rclone.org/googlephotos/)
Hasher (https://rclone.org/hasher/) - to handle checksums for other remotes
HDFS (https://rclone.org/hdfs/)
HiDrive (https://rclone.org/hidrive/)
HTTP (https://rclone.org/http/)
Hubic (https://rclone.org/hubic/)
Internet Archive (https://rclone.org/internetarchive/)
Jottacloud (https://rclone.org/jottacloud/)
Koofr (https://rclone.org/koofr/)
Mail.ru Cloud (https://rclone.org/mailru/)
Mega (https://rclone.org/mega/)
Memory (https://rclone.org/memory/)
Microsoft Azure Blob Storage (https://rclone.org/azureblob/)
Microsoft OneDrive (https://rclone.org/onedrive/)
OpenStack Swift / Rackspace Cloudfiles / Memset Memstore (https://rclone.org/swift/)
OpenDrive (https://rclone.org/opendrive/)
Pcloud (https://rclone.org/pcloud/)
premiumize.me (https://rclone.org/premiumizeme/)
put.io (https://rclone.org/putio/)
QingStor (https://rclone.org/qingstor/)
Seafile (https://rclone.org/seafile/)
SFTP (https://rclone.org/sftp/)
Sia (https://rclone.org/sia/)
Storj (https://rclone.org/storj/)
SugarSync (https://rclone.org/sugarsync/)
Union (https://rclone.org/union/)
Uptobox (https://rclone.org/uptobox/)
WebDAV (https://rclone.org/webdav/)
Yandex Disk (https://rclone.org/yandex/)
Zoho WorkDrive (https://rclone.org/zoho/)
The local filesystem (https://rclone.org/local/)

Rclone syncs a directory tree from one storage system to another.

Its syntax is like this

Syntax: [options] subcommand <parameters> <parameters...>

Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, e.g. "drive:myfolder" to look at "myfolder" in Google drive.

You can define as many storage paths as you like in the config file.

Please use the -i / --interactive flag while learning rclone to avoid accidental data loss.

rclone uses a system of subcommands. For example

rclone ls remote:path # lists a remote
rclone copy /local/path remote:path # copies /local/path to the remote
rclone sync -i /local/path remote:path # syncs /local/path to the remote

Enter an interactive configuration session.

Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration.

rclone config [flags]


-h, --help help for config

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.
rclone config create (https://rclone.org/commands/rclone_config_create/) - Create a new remote with name, type and options.
rclone config delete (https://rclone.org/commands/rclone_config_delete/) - Delete an existing remote.
rclone config disconnect (https://rclone.org/commands/rclone_config_disconnect/) - Disconnects user from remote
rclone config dump (https://rclone.org/commands/rclone_config_dump/) - Dump the config file as JSON.
rclone config file (https://rclone.org/commands/rclone_config_file/) - Show path of configuration file in use.
rclone config password (https://rclone.org/commands/rclone_config_password/) - Update password in an existing remote.
rclone config paths (https://rclone.org/commands/rclone_config_paths/) - Show paths used for configuration, cache, temp etc.
rclone config providers (https://rclone.org/commands/rclone_config_providers/) - List in JSON format all the providers and options.
rclone config reconnect (https://rclone.org/commands/rclone_config_reconnect/) - Re-authenticates user with remote.
rclone config show (https://rclone.org/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
rclone config touch (https://rclone.org/commands/rclone_config_touch/) - Ensure configuration file exists.
rclone config update (https://rclone.org/commands/rclone_config_update/) - Update options in an existing remote.
rclone config userinfo (https://rclone.org/commands/rclone_config_userinfo/) - Prints info about logged in user of remote.

Copy files from source to dest, skipping identical files.

Copy the source to the destination. Does not transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Doesn't delete files from the destination. If you want to also delete files from destination, to make it match source, use the sync (https://rclone.org/commands/rclone_sync/) command instead.

Note that it is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.

To copy single files, use the copyto (https://rclone.org/commands/rclone_copyto/) command instead.

If dest:path doesn't exist, it is created and the source:path contents go there.

For example

rclone copy source:sourcepath dest:destpath

Let's say there are two files in sourcepath

sourcepath/one.txt
sourcepath/two.txt

This copies them to

destpath/one.txt
destpath/two.txt

Not to

destpath/sourcepath/one.txt
destpath/sourcepath/two.txt

If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination.

See the --no-traverse (https://rclone.org/docs/#no-traverse) option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly.

For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this:

rclone copy --max-age 24h --no-traverse /path/to/src remote:

Note: Use the -P/--progress flag to view real-time transfer statistics.

Note: Use the --dry-run or the --interactive/-i flag to test without copying anything.
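
For example, a trial run with live statistics before committing to the copy (paths are illustrative):

rclone copy --dry-run -P /path/to/src remote:backup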

rclone copy source:path dest:path [flags]


--create-empty-src-dirs Create empty source dirs on destination after copy
-h, --help help for copy

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Make source and dest identical, modifying destination only.

Sync the source to the destination, changing the destination only. Doesn't transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary (except duplicate objects, see below). If you don't want to delete files from destination, use the copy (https://rclone.org/commands/rclone_copy/) command instead.

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

rclone sync -i SOURCE remote:DESTINATION

Note that files in the destination won't be deleted if there were any errors at any point. Duplicate objects (files with the same name, on those providers that support it) are also not yet handled.

It is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy (https://rclone.org/commands/rclone_copy/) command if unsure.

If dest:path doesn't exist, it is created and the source:path contents go there.

It is not possible to sync overlapping remotes. However, you may exclude the destination from the sync with a filter rule or by putting an exclude-if-present file inside the destination directory and sync to a destination that is inside the source directory.
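
For example, to sync a remote into a backup directory inside itself, the destination can be excluded with a filter rule (remote and directory names are illustrative):

rclone sync remote: remote:backup --exclude "/backup/**"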

Note: Use the -P/--progress flag to view real-time transfer statistics

Note: Use the rclone dedupe command to deal with "Duplicate object/directory found in source/destination - ignoring" errors. See this forum post (https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) for more info.

rclone sync source:path dest:path [flags]


--create-empty-src-dirs Create empty source dirs on destination after sync
-h, --help help for sync

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Move files from source to dest.

Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server-side directory move operation.

To move single files, use the moveto (https://rclone.org/commands/rclone_moveto/) command instead.

If no filters are in use and if possible this will server-side move source:path into dest:path. After this source:path will no longer exist.

Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path. If possible a server-side move will be used, otherwise it will copy it (server-side if possible) into dest:path then delete the original (if no errors on copy) in source:path.

If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.
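
For example, to move a local directory tree into a remote and clean up the empty directories left behind (paths are illustrative):

rclone move /path/to/local remote:archive --delete-empty-src-dirs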

See the --no-traverse (https://rclone.org/docs/#no-traverse) option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly.

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

Note: Use the -P/--progress flag to view real-time transfer statistics.

rclone move source:path dest:path [flags]


--create-empty-src-dirs Create empty source dirs on destination after move
--delete-empty-src-dirs Delete empty source dirs after move
-h, --help help for move

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Remove the files in path.

Remove the files in path. Unlike purge (https://rclone.org/commands/rclone_purge/) it obeys include/exclude filters so can be used to selectively delete files.

rclone delete only deletes files but leaves the directory structure alone. If you want to delete a directory and all of its contents use the purge (https://rclone.org/commands/rclone_purge/) command.

If you supply the --rmdirs flag, it will remove all empty directories along with it. You can also use the separate command rmdir (https://rclone.org/commands/rclone_rmdir/) or rmdirs (https://rclone.org/commands/rclone_rmdirs/) to delete empty directories only.

For example, to delete all files bigger than 100 MiB, you may first want to check what would be deleted (use either):

rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path

Then proceed with the actual delete:

rclone --min-size 100M delete remote:path

That reads "delete everything with a minimum size of 100 MiB", hence delete all files bigger than 100 MiB.

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

rclone delete remote:path [flags]


-h, --help help for delete
--rmdirs rmdirs removes empty directories but leaves root intact

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Remove the path and all of its contents.

Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use the delete (https://rclone.org/commands/rclone_delete/) command if you want to selectively delete files. To delete empty directories only, use command rmdir (https://rclone.org/commands/rclone_rmdir/) or rmdirs (https://rclone.org/commands/rclone_rmdirs/).

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

rclone purge remote:path [flags]


-h, --help help for purge

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Make the path if it doesn't already exist.

rclone mkdir remote:path [flags]


-h, --help help for mkdir

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Remove the empty directory at path.

This removes the empty directory given by path. It will not remove the path if it has any objects in it, not even empty subdirectories. Use command rmdirs (https://rclone.org/commands/rclone_rmdirs/) (or delete (https://rclone.org/commands/rclone_delete/) with option --rmdirs) to do that.

To delete a path and any objects in it, use purge (https://rclone.org/commands/rclone_purge/) command.

rclone rmdir remote:path [flags]


-h, --help help for rmdir

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Checks the files in the source and destination match.

Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files that don't match. It doesn't alter the source or destination.

For the crypt (https://rclone.org/crypt/) remote there is a dedicated command, cryptcheck (https://rclone.org/commands/rclone_cryptcheck/), that is able to check the checksums of the crypted files.

If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.

If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.

If you supply the --checkfile HASH flag with a valid hash name, the source:path must point to a text file in the SUM format.
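
For example, assuming a SHA1SUMS file previously produced with rclone sha1sum, the listed files could be verified against a remote like this:

rclone check --checkfile sha1 SHA1SUMS remote:path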

If you supply the --one-way flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected.

The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to the file name (or stdout if it is -) supplied. What they write is described in the help below. For example --differ will write all paths which are present on both the source and destination but different.

The --combined flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.

= path means path was found in source and destination and was identical
- path means path was missing on the source, so only in the destination
+ path means path was missing on the destination, so only in the source
* path means path was present in source and destination but different.
! path means there was an error reading or hashing the source or dest.

rclone check source:path dest:path [flags]


-C, --checkfile string Treat source:path as a SUM file with hashes of given type
--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--download Check by downloading rather than with hash
--error string Report all files with errors (hashing or reading) to this file
-h, --help help for check
--match string Report all matching files to this file
--missing-on-dst string Report all files missing from the destination to this file
--missing-on-src string Report all files missing from the source to this file
--one-way Check one way only, source files must exist on remote

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

List the objects in the path with size and path.

Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default.

Eg

$ rclone ls swift:bucket

60295 bevajer5jef
90613 canole
94467 diwogej7
37600 fubuwic

Any of the filtering options can be applied to this command.

There are several related list commands

ls to list size and path of objects only
lsl to list modification time, size and path of objects only
lsd to list directories only
lsf to list objects and directories in easy to parse format
lsjson to list objects and directories in JSON format

ls,lsl,lsd are designed to be human-readable. lsf is designed to be human and machine-readable. lsjson is designed to be machine-readable.

Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.

The other list commands lsd,lsf,lsjson do not recurse by default - use -R to make them recurse.
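
For example, to limit ls to the top level of a remote, or to make lsd recurse:

rclone ls --max-depth 1 remote:path
rclone lsd -R remote:path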

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).

rclone ls remote:path [flags]


-h, --help help for ls

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

List all directories/containers/buckets in the path.

Lists the directories in the source path to standard output. Does not recurse by default. Use the -R flag to recurse.

This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name of the directory, Eg

$ rclone lsd swift:

494000 2018-04-26 08:43:20 10000 10000files
65 2018-04-26 08:43:20 1 1File

Or

$ rclone lsd drive:test

-1 2016-10-17 17:41:53 -1 1000files
-1 2017-01-03 14:40:54 -1 2500files
-1 2017-07-08 14:39:28 -1 4000files

If you just want the directory names use rclone lsf --dirs-only.

Any of the filtering options can be applied to this command.

There are several related list commands

ls to list size and path of objects only
lsl to list modification time, size and path of objects only
lsd to list directories only
lsf to list objects and directories in easy to parse format
lsjson to list objects and directories in JSON format

ls,lsl,lsd are designed to be human-readable. lsf is designed to be human and machine-readable. lsjson is designed to be machine-readable.

Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.

The other list commands lsd,lsf,lsjson do not recurse by default - use -R to make them recurse.

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).

rclone lsd remote:path [flags]


-h, --help help for lsd
-R, --recursive Recurse into the listing

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

List the objects in path with modification time, size and path.

Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default.

Eg

$ rclone lsl swift:bucket

60295 2016-06-25 18:55:41.062626927 bevajer5jef
90613 2016-06-25 18:55:43.302607074 canole
94467 2016-06-25 18:55:43.046609333 diwogej7
37600 2016-06-25 18:55:40.814629136 fubuwic

Any of the filtering options can be applied to this command.

There are several related list commands

ls to list size and path of objects only
lsl to list modification time, size and path of objects only
lsd to list directories only
lsf to list objects and directories in easy to parse format
lsjson to list objects and directories in JSON format

ls,lsl,lsd are designed to be human-readable. lsf is designed to be human and machine-readable. lsjson is designed to be machine-readable.

Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.

The other list commands lsd,lsf,lsjson do not recurse by default - use -R to make them recurse.

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).

rclone lsl remote:path [flags]


-h, --help help for lsl

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Produces an md5sum file for all the objects in the path.

Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces.

By default, the hash is requested from the remote. If MD5 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling MD5 for any remote.

For other algorithms, see the hashsum (https://rclone.org/commands/rclone_hashsum/) command. Running rclone md5sum remote:path is equivalent to running rclone hashsum MD5 remote:path.

This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
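
For example, hashing data piped in on standard input:

printf "hello world" | rclone md5sum -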

rclone md5sum remote:path [flags]


--base64 Output base64 encoded hashsum
-C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for md5sum
--output-file string Output hashsums to a file rather than the terminal

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Produces an sha1sum file for all the objects in the path.

Produces an sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.

By default, the hash is requested from the remote. If SHA-1 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling SHA-1 for any remote.

For other algorithms, see the hashsum (https://rclone.org/commands/rclone_hashsum/) command. Running rclone sha1sum remote:path is equivalent to running rclone hashsum SHA1 remote:path.

This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).

rclone sha1sum remote:path [flags]


--base64 Output base64 encoded hashsum
-C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for sha1sum
--output-file string Output hashsums to a file rather than the terminal

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Prints the total size and number of objects in remote:path.

Counts objects in the path and calculates the total size. Prints the result to standard output.

By default the output is in human-readable format, but shows values in both human-readable format as well as the raw numbers (global option --human-readable is not considered). Use option --json to format output as JSON instead.

Recurses by default, use --max-depth 1 to stop the recursion.

Some backends do not always provide file sizes, see for example Google Photos (https://rclone.org/googlephotos/#size) and Google Drive (https://rclone.org/drive/#limitations-of-google-docs). Rclone will then show a notice in the log indicating how many such files were encountered, and count them in as empty files in the output of the size command.

rclone size remote:path [flags]


-h, --help help for size
--json Format output as JSON

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Show the version number.

Show the rclone version number, the go version, the build target OS and architecture, the runtime OS and kernel version and bitness, build tags and the type of executable (static or dynamic).

For example:

$ rclone version
rclone v1.55.0
- os/version: ubuntu 18.04 (64 bit)
- os/kernel: 4.15.0-136-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16
- go/linking: static
- go/tags: none

Note: before rclone version 1.55 the os/type and os/arch lines were merged, and the "go/version" line was tagged as "go version".

If you supply the --check flag, then it will do an online check to compare your version with the latest release and the latest beta.

$ rclone version --check
yours:  1.42.0.6
latest: 1.42          (released 2018-06-16)
beta:   1.42.0.5      (released 2018-06-17)

Or

$ rclone version --check
yours:  1.41
latest: 1.42          (released 2018-06-16)
  upgrade: https://downloads.rclone.org/v1.42
beta:   1.42.0.5      (released 2018-06-17)
  upgrade: https://beta.rclone.org/v1.42-005-g56e1e820

rclone version [flags]


--check Check for new version
-h, --help help for version

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Clean up the remote if possible.

Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes.

rclone cleanup remote:path [flags]


-h, --help help for cleanup

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Interactively find duplicate filenames and delete/rename them.

By default dedupe interactively finds files with duplicate names and offers to delete all but one or rename them to be different. This is known as deduping by name.

Deduping by name is only useful with a small group of backends (e.g. Google Drive, Opendrive) that can have duplicate file names. It can be run on wrapping backends (e.g. crypt) if they wrap a backend which supports duplicate file names.

However if --by-hash is passed in then dedupe will find files with duplicate hashes instead which will work on any backend which supports at least one hash. This can be used to find files with duplicate content. This is known as deduping by hash.
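
For example, to interactively resolve files with identical content on a remote (remote name is illustrative):

rclone dedupe --by-hash remote:path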

If deduping by name, first rclone will merge directories with the same name. It will do this iteratively until all the identically named directories have been merged.

Next, if deduping by name, for every group of duplicate file names / hashes, it will delete all but one identical file it finds without confirmation. This means that for most duplicated files the dedupe command will not be interactive.

dedupe considers files to be identical if they have the same file path and the same hash. If the backend does not support hashes (e.g. crypt wrapping Google Drive) then they will never be found to be identical. If you use the --size-only flag then files will be considered identical if they have the same size (any hash will be ignored). This can be useful on crypt backends which do not support hashes.

Next rclone will resolve the remaining duplicates. Exactly which action is taken depends on the dedupe mode. By default, rclone will interactively query the user for each one.

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

Here is an example run.

Before - with duplicates

$ rclone lsl drive:dupes

6048320 2016-03-05 16:23:16.798000000 one.txt
6048320 2016-03-05 16:23:11.775000000 one.txt
564374 2016-03-05 16:23:06.731000000 one.txt
6048320 2016-03-05 16:18:26.092000000 one.txt
6048320 2016-03-05 16:22:46.185000000 two.txt
1744073 2016-03-05 16:22:38.104000000 two.txt
564374 2016-03-05 16:22:52.118000000 two.txt

Now the dedupe session

$ rclone dedupe drive:dupes
2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
one.txt: Found 4 files with duplicate names
one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36")
one.txt: 2 duplicates remain

1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> k
Enter the number of the file to keep> 1
one.txt: Deleted 1 extra copies
two.txt: Found 3 files with duplicate names
two.txt: 3 duplicates remain

1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> r
two-1.txt: renamed from: two.txt
two-2.txt: renamed from: two.txt
two-3.txt: renamed from: two.txt

The result being

$ rclone lsl drive:dupes

6048320 2016-03-05 16:23:16.798000000 one.txt
564374 2016-03-05 16:22:52.118000000 two-1.txt
6048320 2016-03-05 16:22:46.185000000 two-2.txt
1744073 2016-03-05 16:22:38.104000000 two-3.txt

Dedupe can be run non-interactively using the --dedupe-mode flag or by using an extra parameter with the same value

--dedupe-mode interactive - interactive as above.
--dedupe-mode skip - removes identical files then skips anything left.
--dedupe-mode first - removes identical files then keeps the first one.
--dedupe-mode newest - removes identical files then keeps the newest one.
--dedupe-mode oldest - removes identical files then keeps the oldest one.
--dedupe-mode largest - removes identical files then keeps the largest one.
--dedupe-mode smallest - removes identical files then keeps the smallest one.
--dedupe-mode rename - removes identical files then renames the rest to be different.
--dedupe-mode list - lists duplicate dirs and files only and changes nothing.

For example, to rename all the identically named photos in your Google Photos directory, do

rclone dedupe --dedupe-mode rename "drive:Google Photos"

Or

rclone dedupe rename "drive:Google Photos"

rclone dedupe [mode] remote:path [flags]


--by-hash Find identical hashes rather than names
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename (default "interactive")
-h, --help help for dedupe

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Get quota information from the remote.

rclone about prints quota information about a remote to standard output. The output is typically used, free, quota and trash contents.

E.g. Typical output from rclone about remote: is:

Total:   17 GiB
Used:    7.444 GiB
Free:    1.315 GiB
Trashed: 100.000 MiB
Other:   8.241 GiB

Where the fields are:

Total: Total size available.
Used: Total size used.
Free: Total space available to this user.
Trashed: Total space used by trash.
Other: Total amount in other storage (e.g. Gmail, Google Photos).
Objects: Total number of objects in the storage.

All sizes are in number of bytes.

Applying a --full flag to the command prints the bytes in full, e.g.

Total:   18253611008
Used:    7993453766
Free:    1411001220
Trashed: 104857602
Other:   8849156022

A --json flag generates conveniently machine-readable output, e.g.

{
    "total": 18253611008,
    "used": 7993453766,
    "trashed": 104857602,
    "other": 8849156022,
    "free": 1411001220
}

Not all backends print all fields. Information is not included if it is not provided by a backend. Where the value is unlimited it is omitted.

Some backends do not support the rclone about command at all; see the complete list in the documentation (https://rclone.org/overview/#optional-features).

rclone about remote: [flags]


--full Full numbers instead of human-readable
-h, --help help for about
--json Format output as JSON

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Remote authorization.

Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.

Use the --auth-no-open-browser flag to prevent rclone from automatically opening the auth link in the default browser.
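
For example, on the machine with a browser you might run something like the following, where "drive" is an illustrative backend name (rclone config on the headless machine prints the exact command to use):

rclone authorize "drive" --auth-no-open-browser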

rclone authorize [flags]


--auth-no-open-browser Do not automatically open auth link in default browser
-h, --help help for authorize

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Run a backend-specific command.

This runs a backend-specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions.

You can discover what commands a backend implements by using

rclone backend help remote:
rclone backend help <backendname>

You can also discover information about the backend with the command below (see operations/fsinfo (https://rclone.org/rc/#operations-fsinfo) in the remote control docs for more info).

rclone backend features remote:

Pass options to the backend command with -o. This should be key=value or key, e.g.:

rclone backend stats remote:path -o format=json -o long

Pass arguments to the backend by placing them at the end of the line:

rclone backend cleanup remote:path file1 file2 file3

Note: to run these commands on a running backend, see backend/command (https://rclone.org/rc/#backend-command) in the rc docs.

rclone backend <command> remote:path [opts] <args> [flags]


-h, --help help for backend
--json Always output in JSON format
-o, --option stringArray Option in the form name=value or name

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Perform bidirectional synchronization between two paths.

Perform bidirectional synchronization between two paths.

Bisync (https://rclone.org/bisync/) provides a bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On each successive run it will:

List files on Path1 and Path2, and check for changes on each side. Changes include New, Newer, Older, and Deleted files.
Propagate changes on Path1 to Path2, and vice-versa.

See full bisync description (https://rclone.org/bisync/) for details.
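
As a sketch (remote and path names are illustrative), the first run is normally done with --resync to establish the baseline listings, after which plain runs keep the two paths in sync:

rclone bisync remote1:path1 remote2:path2 --resync
rclone bisync remote1:path1 remote2:path2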

rclone bisync remote1:path1 remote2:path2 [flags]


--check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
--check-filename string Filename for --check-access (default: RCLONE_TEST)
--check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true")
--filters-file string Read filtering patterns from a file
--force Bypass --max-delete safety check and run the sync. Consider using with --verbose
-h, --help help for bisync
--localtime Use local time in listings (default: UTC)
--no-cleanup Retain working files (useful for troubleshooting and testing).
--remove-empty-dirs Remove empty directories at the final cleanup step.
-1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first.
--workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync)

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Concatenates any files and sends them to stdout.

rclone cat sends any files to standard output.

You can use it like this to output a single file

rclone cat remote:path/to/file

Or like this to output any file in dir or its subdirectories.

rclone cat remote:path/to/dir

Or like this to output any .txt files in dir or its subdirectories.

rclone --include "*.txt" cat remote:path/to/dir

Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.
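
For example (the paths are illustrative), to print the first 100 characters of a file, or a 10 character slice starting at offset 1024:

rclone cat --head 100 remote:path/to/file
rclone cat --offset 1024 --count 10 remote:path/to/file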

rclone cat remote:path [flags]


--count int Only print N characters (default -1)
--discard Discard the output instead of printing
--head int Only print the first N characters
-h, --help help for cat
--offset int Start printing at offset N (or from end if -ve)
--tail int Only print the last N characters

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Checks the files in the source against a SUM file.

Checks that hashsums of source files match the SUM file. It compares hashes (MD5, SHA1, etc) and logs a report of files which don't match. It doesn't alter the file system.

If you supply the --download flag, it will download the data from remote and calculate the contents hash on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.

Note that hash values in the SUM file are treated as case insensitive.
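
For example, to check the files in a remote against a SUM file (the file name SHA1SUMS and the remote path are illustrative):

rclone checksum sha1 SHA1SUMS remote:path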

If you supply the --one-way flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected.

The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to the file name (or stdout if it is -) supplied. What they write is described in the help below. For example --differ will write all paths which are present on both the source and destination but different.

The --combined flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.

= path means path was found in source and destination and was identical
- path means path was missing on the source, so only in the destination
+ path means path was missing on the destination, so only in the source
* path means path was present in source and destination but different.
! path means there was an error reading or hashing the source or dest.

rclone checksum <hash> sumfile src:path [flags]


--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--download Check by hashing the contents
--error string Report all files with errors (hashing or reading) to this file
-h, --help help for checksum
--match string Report all matching files to this file
--missing-on-dst string Report all files missing from the destination to this file
--missing-on-src string Report all files missing from the source to this file
--one-way Check one way only, source files must exist on remote

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Generate the autocompletion script for the specified shell

Generate the autocompletion script for rclone for the specified shell. See each sub-command's help for details on how to use the generated script.


-h, --help help for completion

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.
rclone completion bash (https://rclone.org/commands/rclone_completion_bash/) - Generate the autocompletion script for bash
rclone completion fish (https://rclone.org/commands/rclone_completion_fish/) - Generate the autocompletion script for fish
rclone completion powershell (https://rclone.org/commands/rclone_completion_powershell/) - Generate the autocompletion script for powershell
rclone completion zsh (https://rclone.org/commands/rclone_completion_zsh/) - Generate the autocompletion script for zsh

Generate the autocompletion script for bash

Generate the autocompletion script for the bash shell.

This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager.

To load completions in your current shell session:

source <(rclone completion bash)

To load completions for every new session, execute once:

Linux:

rclone completion bash > /etc/bash_completion.d/rclone

macOS:

rclone completion bash > /usr/local/etc/bash_completion.d/rclone

You will need to start a new shell for this setup to take effect.

rclone completion bash


-h, --help help for bash
--no-descriptions disable completion descriptions

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone completion (https://rclone.org/commands/rclone_completion/) - Generate the autocompletion script for the specified shell

Generate the autocompletion script for fish

Generate the autocompletion script for the fish shell.

To load completions in your current shell session:

rclone completion fish | source

To load completions for every new session, execute once:

rclone completion fish > ~/.config/fish/completions/rclone.fish

You will need to start a new shell for this setup to take effect.

rclone completion fish [flags]


-h, --help help for fish
--no-descriptions disable completion descriptions

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone completion (https://rclone.org/commands/rclone_completion/) - Generate the autocompletion script for the specified shell

Generate the autocompletion script for powershell

Generate the autocompletion script for powershell.

To load completions in your current shell session:

rclone completion powershell | Out-String | Invoke-Expression

To load completions for every new session, add the output of the above command to your powershell profile.

rclone completion powershell [flags]


-h, --help help for powershell
--no-descriptions disable completion descriptions

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone completion (https://rclone.org/commands/rclone_completion/) - Generate the autocompletion script for the specified shell

Generate the autocompletion script for zsh

Generate the autocompletion script for the zsh shell.

If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once:

echo "autoload -U compinit; compinit" >> ~/.zshrc

To load completions for every new session, execute once:

rclone completion zsh > "${fpath[1]}/_rclone"

rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone

You will need to start a new shell for this setup to take effect.

rclone completion zsh [flags]


-h, --help help for zsh
--no-descriptions disable completion descriptions

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone completion (https://rclone.org/commands/rclone_completion/) - Generate the autocompletion script for the specified shell

Create a new remote with name, type and options.

Create a new remote of name with type and options. The options should be passed in pairs of key value or as key=value.

For example, to make a swift remote of name myremote using auto config you would do:

rclone config create myremote swift env_auth true
rclone config create myremote swift env_auth=true

So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this:

rclone config create mydrive drive config_is_local=false

Note that if the config process would normally ask a question the default is taken (unless --non-interactive is used). Each time that happens rclone will print (or log at DEBUG level) a message saying how to affect the value taken.

If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.

NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the --obscure flag, or if you are 100% certain you are already passing obscured passwords then use --no-obscure. You can also set obscured passwords using the rclone config password command.
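
For example, a minimal sketch assuming an SFTP remote with illustrative values, forcing the password to be obscured:

rclone config create mysftp sftp host=example.com user=me pass=mypassword --obscure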

The flag --non-interactive is for use by applications that wish to configure rclone themselves, rather than using rclone's text based configuration questions. If this flag is set, and rclone needs to ask the user a question, a JSON blob will be returned with the question in it.

This will look something like (some irrelevant detail removed):

{
    "State": "*oauth-islocal,teamdrive,,",
    "Option": {
        "Name": "config_is_local",
        "Help": "Use auto config?\n * Say Y if not sure\n * Say N if you are working on a remote or headless machine\n",
        "Default": true,
        "Examples": [
            {
                "Value": "true",
                "Help": "Yes"
            },
            {
                "Value": "false",
                "Help": "No"
            }
        ],
        "Required": false,
        "IsPassword": false,
        "Type": "bool",
        "Exclusive": true
    },
    "Error": ""
}

The format of Option is the same as returned by rclone config providers. The question should be asked to the user and returned to rclone as the --result option along with the --state parameter.

The keys of Option are used as follows:

Name - name of variable - show to user
Help - help text. Hard wrapped at 80 chars. Any URLs should be clicky.
Default - default value - return this if the user just wants the default.
Examples - the user should be able to choose one of these
Required - the value should be non-empty
IsPassword - the value is a password and should be edited as such
Type - type of value, e.g. bool, string, int and others
Exclusive - if set no free-form entry allowed, only the Examples
Irrelevant keys: Provider, ShortOpt, Hide, NoPrefix, Advanced

If Error is set then it should be shown to the user at the same time as the question.

rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"

Note that when using --continue all passwords should be passed in the clear (not obscured). Any default config values should be passed in with each invocation of --continue.

At the end of the non interactive process, rclone will return a result with State as empty string.
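
Putting this together, a minimal sketch of the non-interactive flow for the Google Drive example above might look like this (the remote name and the answer are illustrative):

rclone config create mydrive drive config_is_local=false --non-interactive
rclone config update mydrive --continue --state "*oauth-islocal,teamdrive,," --result "true"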

If --all is passed then rclone will ask all the config questions, not just the post config questions. Any parameters are used as defaults for questions as usual.

Note that bin/config.py in the rclone source implements this protocol as a readable demonstration.

rclone config create name type [key value]* [flags]


--all Ask the full set of config questions
--continue Continue the configuration process with an answer
-h, --help help for create
--no-obscure Force any passwords not to be obscured
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
--state string State - use with --continue

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Delete an existing remote.

rclone config delete name [flags]


-h, --help help for delete

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Disconnects user from remote

This disconnects the remote: passed in to the cloud storage system.

This normally means revoking the oauth token.

To reconnect use "rclone config reconnect".

rclone config disconnect remote: [flags]


-h, --help help for disconnect

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Dump the config file as JSON.

rclone config dump [flags]


-h, --help help for dump

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Enter an interactive configuration session.

Enter an interactive configuration session where you can set up new remotes and manage existing ones. You may also set or remove a password to protect your configuration.

rclone config edit [flags]


-h, --help help for edit

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Show path of configuration file in use.

rclone config file [flags]


-h, --help help for file

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Update password in an existing remote.

Update an existing remote's password. The password should be passed in pairs of key password or as key=password. The password should be passed in clear text (unobscured).

For example, to set password of a remote of name myremote you would do:

rclone config password myremote fieldname mypassword
rclone config password myremote fieldname=mypassword

This command is obsolete now that "config update" and "config create" both support obscuring passwords directly.

rclone config password name [key value]+ [flags]


-h, --help help for password

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Show paths used for configuration, cache, temp etc.

rclone config paths [flags]


-h, --help help for paths

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

List in JSON format all the providers and options.

rclone config providers [flags]


-h, --help help for providers

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Re-authenticates user with remote.

This reconnects remote: passed in to the cloud storage system.

To disconnect the remote use "rclone config disconnect".

This normally means going through the interactive oauth flow again.

rclone config reconnect remote: [flags]


-h, --help help for reconnect

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Print (decrypted) config file, or the config for a single remote.

rclone config show [<remote>] [flags]


-h, --help help for show

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Ensure configuration file exists.

rclone config touch [flags]


-h, --help help for touch

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Update options in an existing remote.

Update an existing remote's options. The options should be passed in pairs of key value or as key=value.

For example, to update the env_auth field of a remote of name myremote you would do:

rclone config update myremote env_auth true
rclone config update myremote env_auth=true

If the remote uses OAuth the token will be updated, if you don't require this add an extra parameter thus:

rclone config update myremote env_auth=true config_refresh_token=false

Note that if the config process would normally ask a question the default is taken (unless --non-interactive is used). Each time that happens rclone will print (or log at DEBUG level) a message saying how to affect the value taken.

If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.

NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the --obscure flag, or if you are 100% certain you are already passing obscured passwords then use --no-obscure. You can also set obscured passwords using the rclone config password command.

The flag --non-interactive is for use by applications that wish to configure rclone themselves, rather than using rclone's text based configuration questions. If this flag is set, and rclone needs to ask the user a question, a JSON blob will be returned with the question in it.

This will look something like (some irrelevant detail removed):

{
    "State": "*oauth-islocal,teamdrive,,",
    "Option": {
        "Name": "config_is_local",
        "Help": "Use auto config?\n * Say Y if not sure\n * Say N if you are working on a remote or headless machine\n",
        "Default": true,
        "Examples": [
            {
                "Value": "true",
                "Help": "Yes"
            },
            {
                "Value": "false",
                "Help": "No"
            }
        ],
        "Required": false,
        "IsPassword": false,
        "Type": "bool",
        "Exclusive": true
    },
    "Error": ""
}

The format of Option is the same as returned by rclone config providers. The question should be asked to the user and returned to rclone as the --result option along with the --state parameter.

The keys of Option are used as follows:

Name - name of variable - show to user
Help - help text. Hard wrapped at 80 chars. Any URLs should be clicky.
Default - default value - return this if the user just wants the default.
Examples - the user should be able to choose one of these
Required - the value should be non-empty
IsPassword - the value is a password and should be edited as such
Type - type of value, e.g. bool, string, int and others
Exclusive - if set no free-form entry allowed, only the Examples
Irrelevant keys: Provider, ShortOpt, Hide, NoPrefix, Advanced

If Error is set then it should be shown to the user at the same time as the question.

rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"

Note that when using --continue all passwords should be passed in the clear (not obscured). Any default config values should be passed in with each invocation of --continue.

At the end of the non interactive process, rclone will return a result with State as empty string.

If --all is passed then rclone will ask all the config questions, not just the post config questions. Any parameters are used as defaults for questions as usual.

Note that bin/config.py in the rclone source implements this protocol as a readable demonstration.

rclone config update name [key value]+ [flags]


--all Ask the full set of config questions
--continue Continue the configuration process with an answer
-h, --help help for update
--no-obscure Force any passwords not to be obscured
--non-interactive Don't interact with user and return questions
--obscure Force any passwords to be obscured
--result string Result - use with --continue
--state string State - use with --continue

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Prints info about logged in user of remote.

This prints the details of the person logged in to the cloud storage system.

rclone config userinfo remote: [flags]


-h, --help help for userinfo
--json Format output as JSON

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session.

Copy files from source to dest, skipping identical files.

If source:path is a file or directory then it copies it to a file or directory named dest:path.

This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy (https://rclone.org/commands/rclone_copy/) command.

So

rclone copyto src dst

where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\if\on\windows.

This will:

if src is file
  copy it to dst, overwriting an existing file if it exists
if src is directory
  copy it to dst, overwriting existing files if they exist
  see copy command for full details

This doesn't transfer files that are identical on src and dst, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
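
For example, to upload a local file under a different name (the paths are illustrative):

rclone copyto /path/to/local/file.txt remote:backup/renamed.txt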

Note: Use the -P/--progress flag to view real-time transfer statistics

rclone copyto source:path dest:path [flags]


-h, --help help for copyto

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Copy url content to dest.

Download a URL's content and copy it to the destination without saving it in temporary storage.

Setting --auto-filename will attempt to automatically determine the filename from the URL (after any redirections) and use it in the destination path. With --header-filename in addition, if a specific filename is set in HTTP headers, it will be used instead of the name from the URL. With --print-filename in addition, the resulting file name will be printed.

Setting --no-clobber will prevent overwriting a file on the destination if there is one with the same name.

Setting --stdout or making the output file name "-" will cause the output to be written to standard output.
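
For example (the URL and destination are illustrative), to download a file keeping the name from the URL, or to stream it to standard output:

rclone copyurl -a https://example.com/file.txt dest:path
rclone copyurl --stdout https://example.com/file.txt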

rclone copyurl https://example.com dest:path [flags]


-a, --auto-filename Get the file name from the URL and use it for destination file path
--header-filename Get the file name from the Content-Disposition header
-h, --help help for copyurl
--no-clobber Prevent overwriting file with same name
-p, --print-filename Print the resulting name from --auto-filename
--stdout Write the output to stdout rather than a file

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Cryptcheck checks the integrity of a crypted remote.

rclone cryptcheck checks a remote against a crypted (https://rclone.org/crypt/) remote. This is the equivalent of running rclone check (https://rclone.org/commands/rclone_check/), but able to check the checksums of the crypted remote.

For it to work the underlying remote of the cryptedremote must support some kind of checksum.

It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.

Use it like this

rclone cryptcheck /path/to/files encryptedremote:path

You can use it like this also, but that will involve downloading all the files in remote:path.

rclone cryptcheck remote:path encryptedremote:path

After it has run it will log the status of the encryptedremote:.

If you supply the --one-way flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected.

The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to the file name (or stdout if it is -) supplied. What they write is described in the help below. For example --differ will write all paths which are present on both the source and destination but different.

The --combined flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.

= path means path was found in source and destination and was identical
- path means path was missing on the source, so only in the destination
+ path means path was missing on the destination, so only in the source
* path means path was present in source and destination but different.
! path means there was an error reading or hashing the source or dest.

rclone cryptcheck remote:path cryptedremote:path [flags]


--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--error string Report all files with errors (hashing or reading) to this file
-h, --help help for cryptcheck
--match string Report all matching files to this file
--missing-on-dst string Report all files missing from the destination to this file
--missing-on-src string Report all files missing from the source to this file
--one-way Check one way only, source files must exist on remote

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Cryptdecode returns unencrypted file names.

rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.

If you supply the --reverse flag, it will return encrypted file names.

Use it like this:

rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
rclone cryptdecode --reverse encryptedremote: filename1 filename2

Another way to accomplish this is by using the rclone backend encode (or decode) command. See the documentation on the crypt (https://rclone.org/crypt/) overlay for more info.

rclone cryptdecode encryptedremote: encryptedfilename [flags]


-h, --help help for cryptdecode
--reverse Reverse cryptdecode, encrypts filenames

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Remove a single file from remote.

Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.

rclone deletefile remote:path [flags]


-h, --help help for deletefile

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Output completion script for a given shell.

Generates a shell completion script for rclone. Run with --help to list the supported shells.


-h, --help help for genautocomplete

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.
rclone genautocomplete bash (https://rclone.org/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone.
rclone genautocomplete fish (https://rclone.org/commands/rclone_genautocomplete_fish/) - Output fish completion script for rclone.
rclone genautocomplete zsh (https://rclone.org/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone.

Output bash completion script for rclone.

Generates a bash shell autocompletion script for rclone.

This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, e.g.

sudo rclone genautocomplete bash

Logout and login again to use the autocompletion scripts, or source them directly

. /etc/bash_completion

If you supply a command line argument the script will be written there.

If output_file is "-", then the output will be written to stdout.

rclone genautocomplete bash [output_file] [flags]


-h, --help help for bash

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone genautocomplete (https://rclone.org/commands/rclone_genautocomplete/) - Output completion script for a given shell.

Output fish completion script for rclone.

Generates a fish autocompletion script for rclone.

This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g.

sudo rclone genautocomplete fish

Logout and login again to use the autocompletion scripts, or source them directly

. /etc/fish/completions/rclone.fish

If you supply a command line argument the script will be written there.

If output_file is "-", then the output will be written to stdout.

rclone genautocomplete fish [output_file] [flags]


-h, --help help for fish

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone genautocomplete (https://rclone.org/commands/rclone_genautocomplete/) - Output completion script for a given shell.

Output zsh completion script for rclone.

Generates a zsh autocompletion script for rclone.

This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g.

sudo rclone genautocomplete zsh

Logout and login again to use the autocompletion scripts, or source them directly

autoload -U compinit && compinit

If you supply a command line argument the script will be written there.

If output_file is "-", then the output will be written to stdout.

rclone genautocomplete zsh [output_file] [flags]


-h, --help help for zsh

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone genautocomplete (https://rclone.org/commands/rclone_genautocomplete/) - Output completion script for a given shell.

Output markdown docs for rclone to the directory supplied.

This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.

rclone gendocs output_directory [flags]


-h, --help help for gendocs

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Produces a hashsum file for all the objects in the path.

Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.

By default, the hash is requested from the remote. If the hash is not supported by the remote, no hash will be returned. With the --download flag, the file will be downloaded from the remote and hashed locally, enabling any hash for any remote.

For the MD5 and SHA1 algorithms there are also dedicated commands, md5sum (https://rclone.org/commands/rclone_md5sum/) and sha1sum (https://rclone.org/commands/rclone_sha1sum/).

This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
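
For example, a minimal sketch hashing data piped in on stdin:

echo "hello" | rclone hashsum MD5 -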

Run without a hash to see the list of all supported hashes, e.g.

$ rclone hashsum
Supported hashes are:

* md5
* sha1
* whirlpool
* crc32
* sha256
* dropbox
* hidrive
* mailru
* quickxor

Then

$ rclone hashsum MD5 remote:path

Note that hash names are case insensitive and values are output in lower case.
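
For example (the remote path and SUM file name are illustrative), to write the checksums to a file and later verify the remote against it:

rclone hashsum sha1 --output-file SHA1SUMS remote:path
rclone hashsum sha1 -C SHA1SUMS remote:path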

rclone hashsum <hash> remote:path [flags]


--base64 Output base64 encoded hashsum
-C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for hashsum
--output-file string Output hashsums to a file rather than the terminal

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Generate public link to file/folder.

rclone link will create, retrieve or remove a public link to the given file or folder.

rclone link remote:path/to/file
rclone link remote:path/to/folder/
rclone link --unlink remote:path/to/folder/
rclone link --expire 1d remote:path/to/file

If you supply the --expire flag, it will set the expiration time otherwise it will use the default (100 years). Note not all backends support the --expire flag - if the backend doesn't support it then the link returned won't expire.

Use the --unlink flag to remove existing public links to the file or folder. Note not all backends support "--unlink" flag - those that don't will just ignore it.

If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by default be created with the least constraints – e.g. no expiry, no password protection, accessible without account.

rclone link remote:path [flags]


--expire Duration The amount of time that the link will be valid (default off)
-h, --help help for link
--unlink Remove existing public link to file/folder

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

List all the remotes in the config file.

rclone listremotes lists all the available remotes from the config file.

When used with the --long flag it lists the types too.

rclone listremotes [flags]


-h, --help help for listremotes
--long Show the type as well as names

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

List directories and objects in remote:path formatted for parsing.

List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.

Eg

$ rclone lsf swift:bucket
bevajer5jef
canole
diwogej7
ferejej3gux/
fubuwic

Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:

p - path
s - size
t - modification time
h - hash
i - ID of object
o - Original ID of underlying object
m - MimeType of object if known
e - encrypted name
T - tier of storage if known, e.g. "Hot" or "Cool"
M - Metadata of object in JSON blob format, e.g. {"key":"value"}

So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last.

Eg

$ rclone lsf  --format "tsp" swift:bucket
2016-06-25 18:55:41;60295;bevajer5jef
2016-06-25 18:55:43;90613;canole
2016-06-25 18:55:43;94467;diwogej7
2018-04-26 08:50:45;0;ferejej3gux/
2016-06-25 18:55:40;37600;fubuwic

If you specify "h" in the format you will get the MD5 hash by default, use the --hash flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.

For example, to emulate the md5sum command you can use

rclone lsf -R --hash MD5 --format hp --separator "  " --files-only .

Eg

$ rclone lsf -R --hash MD5 --format hp --separator "  " --files-only swift:bucket
7908e352297f0f530b84a756f188baa3  bevajer5jef
cd65ac234e6fea5925974a51cdd865cc  canole
03b5341b4f234b9d984d03ad076bae91  diwogej7
8fd37c3810dd660778137ac3a66cc06d  fubuwic
99713e14a4c4ff553acaf1930fad985b  gixacuh7ku

(Though "rclone md5sum ." is an easier way of typing this.)

By default the separator is ";", but this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy.

Eg

$ rclone lsf  --separator "," --format "tshp" swift:bucket
2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
2018-04-26 08:52:53,0,,ferejej3gux/
2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic

You can output in standard CSV format. This will escape values in double quotes if they contain a comma.

Eg

$ rclone lsf --csv --files-only --format ps remote:path
test.log,22355
test.sh,449
"this file contains a comma, in the file name.txt",6

Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw flag.

For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure):

rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
rclone copy --files-from-raw new_files /path/to/local remote:path

Any of the filtering options can be applied to this command.

There are several related list commands

ls to list size and path of objects only
lsl to list modification time, size and path of objects only
lsd to list directories only
lsf to list objects and directories in easy to parse format
lsjson to list objects and directories in JSON format

ls, lsl, lsd are designed to be human-readable. lsf is designed to be human and machine-readable. lsjson is designed to be machine-readable.

Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.

The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).

rclone lsf remote:path [flags]


--absolute Put a leading / in front of path names
--csv Output in CSV format
-d, --dir-slash Append a slash to directory names (default true)
--dirs-only Only list directories
--files-only Only list files
-F, --format string Output format - see help for details (default "p")
--hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
-h, --help help for lsf
-R, --recursive Recurse into the listing
-s, --separator string Separator for the items in the format (default ";")

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

List directories and objects in the path in JSON format.

List directories and objects in the path in JSON format.

The output is an array of Items, where each Item looks like this

{
    "Hashes" : {
        "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
        "MD5" : "b1946ac92492d2347c6235b4d2611184",
        "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
    },
    "ID": "y2djkhiujf83u33",
    "OrigID": "UYOJVTUW00Q1RzTDA",
    "IsBucket" : false,
    "IsDir" : false,
    "MimeType" : "application/octet-stream",
    "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
    "Name" : "file.txt",
    "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
    "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
    "Path" : "full/path/goes/here/file.txt",
    "Size" : 6,
    "Tier" : "hot"
}

If --hash is not specified the Hashes property won't be emitted. The types of hash can be specified with the --hash-type parameter (which may be repeated). If --hash-type is set then it implies --hash.

If --no-modtime is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (e.g. s3, swift).

If --no-mimetype is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (e.g. s3, swift).

If --encrypted is not specified the Encrypted property won't be emitted.

If --dirs-only is not specified files in addition to directories are returned.

If --files-only is not specified directories in addition to the files will be returned.

If --metadata is set then an additional Metadata key will be returned. This will have metadata in rclone standard format as a JSON object.

If --stat is set then a single JSON blob will be returned about the item pointed to. This will return an error if the item isn't found. However on bucket-based backends (like s3, gcs, b2, azureblob etc) if the item isn't found it will return an empty directory, as it isn't possible to tell empty directories from missing directories there.

The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name.

If the directory is a bucket in a bucket-based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true".

The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (e.g. Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav, etc.) no digits will be shown ("2017-05-31T16:15:57+01:00").

The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line.
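
For example, assuming the jq utility is installed (the remote name is illustrative), you could list just the size and path of every object:

rclone lsjson -R remote:path | jq -r '.[] | "\(.Size) \(.Path)"'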

Any of the filtering options can be applied to this command.

There are several related list commands

ls to list size and path of objects only
lsl to list modification time, size and path of objects only
lsd to list directories only
lsf to list objects and directories in easy to parse format
lsjson to list objects and directories in JSON format

ls, lsl, lsd are designed to be human-readable. lsf is designed to be human and machine-readable. lsjson is designed to be machine-readable.

Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.

The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.

Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).

rclone lsjson remote:path [flags]


--dirs-only Show only directories in the listing
--encrypted Show the encrypted names
--files-only Show only files in the listing
--hash Include hashes in the output (may take longer)
--hash-type stringArray Show only this hash type (may be repeated)
-h, --help help for lsjson
--no-mimetype Don't read the mime type (can speed things up)
--no-modtime Don't read the modification time (can speed things up)
--original Show the ID of the underlying Object
-R, --recursive Recurse into the listing
--stat Just return the info for the pointed to file

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Mount the remote as file system on a mountpoint.

rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

First set up your remote using rclone config. Check it works with rclone ls etc.

On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. Use the --daemon flag to force background mode. On Windows you can run mount in foreground only, the flag is ignored.

In background mode rclone acts as a generic Unix mount program: the main program starts, spawns a background rclone process to set up and maintain the mount, waits until success or timeout, and exits with the appropriate code (killing the child process if it fails).
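
For example, to start a background mount on Linux or macOS (the paths are illustrative):

rclone mount remote:path/to/files /path/to/local/mount --daemon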

On Linux/macOS/FreeBSD start the mount like this, where /path/to/local/mount is an empty existing directory:

rclone mount remote:path/to/files /path/to/local/mount

On Windows you can start a mount in different ways. See below for details. If foreground mount is used interactively from a console window, rclone will serve the mount and occupy the console so another window should be used to work with the mount until rclone is interrupted e.g. by pressing Ctrl-C.

The following examples will mount to an automatically assigned drive, to specific drive letter X:, to path C:\path\parent\mount (where parent directory or drive must exist, and mount must not exist, and is not supported when mounting as a network drive), and the last example will mount as network share \\cloud\remote and map it to an automatically assigned drive:

rclone mount remote:path/to/files *
rclone mount remote:path/to/files X:
rclone mount remote:path/to/files C:\path\parent\mount
rclone mount remote:path/to/files \\cloud\remote

When the program ends while in foreground mode, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount should be automatically stopped.

When running in background mode the user will have to stop the mount manually:

# Linux
fusermount -u /path/to/local/mount
# OS X
umount /path/to/local/mount

The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.

The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about (https://rclone.org/commands/rclone_about/) command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not support (https://rclone.org/overview/#optional-features) the about feature at all, then 1 PiB is set as both the total and the free size.

To run rclone mount on Windows, you will need to download and install WinFsp (http://www.secfs.net/winfsp/).

WinFsp (https://github.com/winfsp/winfsp) is an open-source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses in combination with cgofuse (https://github.com/winfsp/cgofuse). Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows.

Unlike other operating systems, Microsoft Windows provides a different filesystem type for network and fixed drives. It optimises access on the assumption fixed disk drives are fast and reliable, while network drives have relatively high latency and less reliability. Some settings can also be differentiated between the two types, for example that Windows Explorer should just display icons and not create preview thumbnails for image and video files on network drives.

In most cases, rclone will mount the remote as a normal, fixed disk drive by default. However, you can also choose to mount it as a remote network drive, often described as a network share. If you mount an rclone remote using the default, fixed drive mode and experience unexpected program errors, freezes or other issues, consider mounting as a network drive instead.

When mounting as a fixed disk drive you can either mount to an unused drive letter, or to a path representing a non-existent subdirectory of an existing parent directory or drive. Using the special value * will tell rclone to automatically assign the next available drive letter, starting with Z: and moving backward. Examples:

rclone mount remote:path/to/files *
rclone mount remote:path/to/files X:
rclone mount remote:path/to/files C:\path\parent\mount
rclone mount remote:path/to/files X:

Option --volname can be used to set a custom volume name for the mounted file system. The default is to use the remote name and path.

To mount as network drive, you can add option --network-mode to your mount command. Mounting to a directory path is not supported in this mode; it is a limitation Windows imposes on junctions, so the remote must always be mounted to a drive letter.

rclone mount remote:path/to/files X: --network-mode

A volume name specified with --volname will be used to create the network share path. A complete UNC path, such as \\cloud\remote, optionally with path \\cloud\remote\madeup\path, will be used as is. Any other string will be used as the share part, after a default prefix \\server\. If no volume name is specified then \\server\share will be used. You must make sure the volume name is unique when you are mounting more than one drive, or else the mount command will fail. The share name will be treated as the volume label for the mapped drive, shown in Windows Explorer etc, while the complete \\server\share will be reported as the remote UNC path by net use etc, just like a normal network drive mapping.

If you specify a full network share UNC path with --volname, this will implicitly set the --network-mode option, so the following two examples have the same result:

rclone mount remote:path/to/files X: --network-mode
rclone mount remote:path/to/files X: --volname \\server\share

You may also specify the network share UNC path as the mountpoint itself. Then rclone will automatically assign a drive letter, same as with *, and use that as the mountpoint, and instead use the UNC path specified as the volume name, as if it were specified with the --volname option. This will also implicitly set the --network-mode option. This means the following two examples have the same result:

rclone mount remote:path/to/files \\cloud\remote
rclone mount remote:path/to/files * --volname \\cloud\remote

There is yet another way to enable network mode, and to set the share path, and that is to pass the "native" libfuse/WinFsp option directly: --fuse-flag --VolumePrefix=\server\share. Note that the path must be with just a single backslash prefix in this case.

Note: In previous versions of rclone this was the only supported method.

Read more about drive mapping (https://en.wikipedia.org/wiki/Drive_mapping)

See also Limitations section below.

The FUSE emulation layer on Windows must convert between the POSIX-based permission model used in FUSE, and the permission model used in Windows, based on access-control lists (ACL).

The mounted filesystem will normally get three entries in its access-control list (ACL), representing permissions for the POSIX permission scopes: Owner, group and others. By default, the owner and group will be taken from the current user, and the built-in group "Everyone" will be used to represent others. The user/group can be customized with FUSE options "UserName" and "GroupName", e.g. -o UserName=user123 -o GroupName="Authenticated Users". The permissions on each entry will be set according to options --dir-perms and --file-perms, which take a value in traditional numeric notation (https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation).

The default permissions correspond to --file-perms 0666 --dir-perms 0777, i.e. read and write permissions for everyone. This means you will not be able to start any programs from the mount. To be able to do that you must add execute permissions, e.g. --file-perms 0777 --dir-perms 0777 to add them for everyone. If the program needs to write files, chances are you will have to enable VFS File Caching as well (see also limitations).
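For example, a mount that allows programs to be run from it could be started along these lines (the remote name, drive letter and cache mode are illustrative, not a recommendation):

rclone mount remote:path/to/files X: --file-perms 0777 --dir-perms 0777 --vfs-cache-mode writes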

Note that the mapping of permissions is not always trivial, and the result you see in Windows Explorer may not be exactly like you expected. For example, when setting a value that includes write access, this will be mapped to individual permissions "write attributes", "write data" and "append data", but not "write extended attributes". Windows will then show this as basic permission "Special" instead of "Write", because "Write" includes the "write extended attributes" permission.

If you set POSIX permissions for only allowing access to the owner, using --file-perms 0600 --dir-perms 0700, the user group and the built-in "Everyone" group will still be given some special permissions, such as "read attributes" and "read permissions", in Windows. This is done for compatibility reasons, e.g. to allow users without additional permissions to be able to read basic metadata about files like in UNIX. One case that may arise is that other programs (incorrectly) interpret this as the file being accessible by everyone. For example an SSH client may warn about "unprotected private key file".

WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity", that allows the complete specification of file security descriptors using SDDL (https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format). With this you can work around issues such as the mentioned "unprotected private key file" by specifying -o FileSecurity="D:P(A;;FA;;;OW)", for file all access (FA) to the owner (OW).
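As a sketch, combining this option with a mount command might look like the following (the SDDL string is the one from the example above; the remote name and drive letter are illustrative, and WinFsp 1.9 or later is assumed):

rclone mount remote:path/to/files X: -o FileSecurity="D:P(A;;FA;;;OW)"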

Drives created as Administrator are not visible to other accounts, not even an account that was elevated to Administrator with the User Account Control (UAC) feature. A result of this is that if you mount to a drive letter from a Command Prompt run as Administrator, and then try to access the same drive from Windows Explorer (which does not run as Administrator), you will not be able to see the mounted drive.

If you don't need to access the drive from applications running with administrative privileges, the easiest way around this is to always create the mount from a non-elevated command prompt.

To make mapped drives available to the user account that created them regardless if elevated or not, there is a special Windows setting called linked connections (https://docs.microsoft.com/en-us/troubleshoot/windows-client/networking/mapped-drives-not-available-from-elevated-command#detail-to-configure-the-enablelinkedconnections-registry-entry) that can be enabled.

It is also possible to make a drive mount available to everyone on the system, by running the process creating it as the built-in SYSTEM account. There are several ways to do this: One is to use the command-line utility PsExec (https://docs.microsoft.com/en-us/sysinternals/downloads/psexec), from Microsoft's Sysinternals suite, which has option -s to start processes as the SYSTEM account. Another alternative is to run the mount command from a Windows Scheduled Task, or a Windows Service, configured to run as the SYSTEM account. A third alternative is to use the WinFsp.Launcher infrastructure (https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture). Note that when running rclone as another user, it will not use the configuration file from your profile unless you tell it to with the --config (https://rclone.org/docs/#config-config-file) option. Read more in the install documentation (https://rclone.org/install/).
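For instance, a mount made available system-wide via PsExec might be started roughly like this (the config file path is an illustrative assumption; note the explicit --config, since the SYSTEM account does not use your user profile):

PsExec -s rclone mount remote:path/to/files X: --config C:\rclone\rclone.conf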

Note that mapping to a directory path, instead of a drive letter, does not suffer from the same limitations.

Without the use of --vfs-cache-mode this can only write files sequentially, and it can only seek when reading. This means that many applications won't work with their files on an rclone mount without --vfs-cache-mode writes or --vfs-cache-mode full. See the VFS File Caching section for more info.

The bucket-based remotes (e.g. Swift, S3, Google Cloud Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.

When rclone mount is invoked on Unix with the --daemon flag, the main rclone program will wait for the background mount to become ready, or until the timeout specified by the --daemon-wait flag. On Linux it can check mount status using ProcFS, so the flag in fact sets the maximum time to wait, while the real wait can be less. On macOS / BSD the time to wait is constant and the check is performed only at the end, so we advise you to set the wait time on macOS to a reasonable value.

Only supported on Linux, FreeBSD, OS X and Windows at the moment.

File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the VFS File Caching for solutions to make mount more reliable.

You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time, etc.) for directory entries.

The default is 1s which caches files just long enough to avoid too many callbacks to rclone from the kernel.

In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as rclone using too much memory (https://github.com/rclone/rclone/issues/2157), rclone not serving files to samba (https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112) and excessive time listing directories (https://github.com/rclone/rclone/issues/2095#issuecomment-371141147).

The kernel can cache the info about a file for the time given by --attr-timeout. You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With --attr-timeout 1s this is very unlikely but not impossible. The higher you set --attr-timeout the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above.

If you set it higher (10s or 1m say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above.

If files don't change on the remote outside of the control of rclone then there is no chance of corruption.

This is the same as setting the attr_timeout option in mount.fuse.

Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.

When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode.

Note that systemd runs mount units without any environment variables including PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount is present on this PATH.
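A minimal systemd service unit illustrating these points might look like the following (the unit name, remote, mountpoint, paths and the fusermount location are assumptions; adjust them to your setup):

# /etc/systemd/system/rclone-data.service (illustrative)
[Unit]
Description=rclone mount of remote:path at /mnt/data
After=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/rclone mount remote:path /mnt/data --config /etc/rclone.conf --cache-dir /var/cache/rclone --vfs-cache-mode writes
ExecStop=/usr/bin/fusermount -u /mnt/data
[Install]
WantedBy=multi-user.target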

The core Unix program /bin/mount normally takes the -t FSTYPE argument then runs the /sbin/mount.FSTYPE helper program passing it mount options as -o key=val,... or --opt=.... Automount (classic or systemd) behaves in a similar way.

rclone by default expects GNU-style flags --key val. To run it as a mount helper you should symlink rclone binary to /sbin/mount.rclone and optionally /usr/bin/rclonefs, e.g. ln -s /usr/bin/rclone /sbin/mount.rclone. rclone will detect it and translate command-line arguments appropriately.

Now you can run classic mounts like this:

mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem

or create systemd mount units:

# /etc/systemd/system/mnt-data.mount
[Unit]
After=network-online.target
[Mount]
Type=rclone
What=sftp1:subdir
Where=/mnt/data
Options=rw,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone

optionally accompanied by systemd automount unit

# /etc/systemd/system/mnt-data.automount
[Unit]
After=network-online.target
Before=remote-fs.target
[Automount]
Where=/mnt/data
TimeoutIdleSec=600
[Install]
WantedBy=multi-user.target

or add in /etc/fstab a line like

sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0

or use classic Automountd. Remember to provide explicit config=...,cache-dir=... as a workaround for mount units being run without HOME.

Rclone in the mount helper mode will split -o argument(s) by comma, replace _ by - and prepend -- to get the command-line flags. Options containing commas or spaces can be wrapped in single or double quotes. Any inner quotes inside outer quotes of the same type should be doubled.
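As an illustration of this translation (assuming the symlink setup above), the helper invocation

mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,allow_other

is roughly equivalent to running

rclone mount sftp1:subdir /mnt/data --vfs-cache-mode=writes --allow-other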

Mount option syntax includes a few extra options treated specially:

env.NAME=VALUE will set an environment variable for the mount process. This helps with Automountd and Systemd.mount which don't allow setting custom environment for mount helpers. Typically you will use env.HTTPS_PROXY=proxy.host:3128 or env.HOME=/root
command=cmount can be used to run cmount or any other rclone command rather than the default mount.
args2env will pass mount options to the mount helper running in background via environment variables instead of command line arguments. This allows hiding secrets from commands such as ps or pgrep.
vv... will be transformed into appropriate --verbose=N
standard mount options like x-systemd.automount, _netdev, nosuid and alike are intended only for Automountd and ignored by rclone.

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for (default 5m0s)
--poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.
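For example, two mounts kept from sharing a cache might be started along these lines (remote names, mountpoints and cache directories are illustrative):

rclone mount remote1:path /mnt/one --vfs-cache-mode writes --cache-dir /var/cache/rclone-one
rclone mount remote2:path /mnt/two --vfs-cache-mode writes --cache-dir /var/cache/rclone-two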

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

Files can't be opened for both read AND write
Files opened for write can't be seeked
Existing files opened for write must have O_TRUNC set
Files open for read with O_TRUNC will be opened write only
Files open for write only will behave as if O_TRUNC was supplied
Open modes O_APPEND, O_TRUNC are ignored
If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, while using minimal disk space.

These operations are not possible

Files opened for write only can't be seeked
Existing files opened for write must have O_TRUNC set
Files opened for write only will ignore O_APPEND, O_TRUNC
If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
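A sketch of such a mount, with a modest memory buffer and a larger on-disk read-ahead (the remote, mountpoint and sizes are illustrative, not recommendations):

rclone mount remote:media /mnt/media --vfs-cache-mode full --buffer-size 16M --vfs-read-ahead 256M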

IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

size
modification time
hash

where available on an object.

On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.

Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.

VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

These flags control the chunking:

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.

With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.
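For example, to start with smaller chunks and cap their growth (the remote, mountpoint and sizes are illustrative):

rclone mount remote:path /mnt/data --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 512M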

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Only allow read-only access.

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

--transfers int  Number of file transfers to run in parallel (default 4)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

VFS Disk Options

This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

--vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
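A minimal sketch (the mountpoint is illustrative):

rclone mount remote:path /mnt/data --vfs-used-is-size
df -h /mnt/data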

rclone mount remote:path /path/to/mountpoint [flags]


--allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
--allow-other Allow access to other users (not supported on Windows)
--allow-root Allow access to root user (not supported on Windows)
--async-read Use asynchronous reads (not supported on Windows) (default true)
--attr-timeout duration Time for which file/directory attributes are cached (default 1s)
--daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)
--daemon-timeout duration Time limit for rclone to respond to kernel (not supported on Windows)
--daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
--debug-fuse Debug the FUSE internals - needs -v
--default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
--devname string Set the device name - default is remote:path
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for mount
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true)
--noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
-o, --option stringArray Option for libfuse/WinFsp (repeat if required)
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
--volname string Set the volume name (supported on Windows and OSX only)
--write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Move file or directory from source to dest.

If source:path is a file or directory then it moves it to a file or directory named dest:path.

This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move (https://rclone.org/commands/rclone_move/) command.

So

rclone moveto src dst

where src and dst are rclone paths, either remote:path or /path/to/local or C:.

This will:

if src is file
    move it to dst, overwriting an existing file if it exists
if src is directory
    move it to dst, overwriting existing files if they exist
    see move command for full details

This doesn't transfer files that are identical on src and dst, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

Note: Use the -P/--progress flag to view real-time transfer statistics.

rclone moveto source:path dest:path [flags]


-h, --help help for moveto

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Explore a remote with a text based user interface.

This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".

To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.

You can interact with the user interface using key presses, press '?' to toggle the help on and off. The supported keys are:


↑,↓ or k,j to Move
→,l to enter
←,h to return
c toggle counts
g toggle graph
a toggle average size in directory
u toggle human-readable format
n,s,C,A sort by name,size,count,average size
d delete file/directory
v select file/directory
V enter visual select mode
D delete selected files/directories
y copy current path to clipboard
Y display current path
^L refresh screen (fix screen corruption)
? to toggle help on and off
q/ESC/^c to quit

Listed files/directories may be prefixed by a one-character flag, some of them combined with a description in brackets at end of line. These flags have the following meaning:

e means this is an empty directory, i.e. contains no files (but may contain empty subdirectories)
~ means this is a directory where some of the files (possibly in subdirectories) have unknown size, and therefore the directory size may be underestimated (and average size inaccurate, as it is average of the files with known sizes)
. means an error occurred while reading a subdirectory, and therefore the directory size may be underestimated (and average size inaccurate)
! means an error occurred while reading this directory

This is an homage to the ncdu tool (https://dev.yorhel.nl/ncdu) but for rclone remotes. It is missing lots of features at the moment but is useful as it stands.

Note that it might take some time to delete big files/directories. The UI won't respond in the meantime since the deletion is done synchronously.

For a non-interactive listing of the remote, see the tree (https://rclone.org/commands/rclone_tree/) command. To just get the total size of the remote you can also use the size (https://rclone.org/commands/rclone_size/) command.

rclone ncdu remote:path [flags]


-h, --help help for ncdu

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Obscure password for use in the rclone config file.

In the rclone config file, human-readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident.

Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token.

This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline.

echo "secretpassword" | rclone obscure -

If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself.

If you want to encrypt the config file then please use config file encryption - see rclone config (https://rclone.org/commands/rclone_config/) for more info.

rclone obscure password [flags]


-h, --help help for obscure

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Run a command against a running rclone.

This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port".

A username and password can be passed in with --user and --pass.

Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.

Arguments should be passed in as parameter=value.

The result will be returned as a JSON object by default.

The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.
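For example (a sketch using the operations/about call that also appears below with --loopback), the parameters can be supplied as a JSON blob instead of key=value arguments:

rclone rc --loopback --json '{"fs": "/"}' operations/about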

The -o/--opt option can be used to set a key "opt" with key, value options in the form -o key=value or -o key. It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings.

-o key=value -o key2

Will place this in the "opt" value

{"key":"value", "key2","")

The -a/--arg option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings.

-a value -a value2

Will place this in the "arg" value

["value", "value2"]

Use --loopback to connect to the rclone instance running rclone rc. This is very useful for testing commands without having to run an rclone rc server, e.g.:

rclone rc --loopback operations/about fs=/

Use rclone rc to see a list of all possible commands.

rclone rc commands parameter [flags]


-a, --arg stringArray Argument placed in the "arg" array
-h, --help help for rc
--json string Input JSON - use instead of key=value args
--loopback If set connect to this rclone instance not via HTTP
--no-output If set, don't output the JSON result
-o, --opt stringArray Option in the form name=value or name placed in the "opt" array
--pass string Password to use to connect to rclone remote control
--url string URL to connect to rclone remote control (default "http://localhost:5572/")
--user string Username to use to rclone remote control

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Copies standard input to file on remote.

rclone rcat reads from standard input (stdin) and copies it to a single remote file.

echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file

If the remote file already exists, it will be overwritten.

rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote; please see its documentation. Generally speaking, setting this cutoff too high will decrease your performance.

Use the --size flag to preallocate the file in advance at the remote end and actually stream it, even if remote backend doesn't support streaming.

--size should be the exact size of the input stream in bytes. If the size of the stream is different in length to the --size passed in then the transfer will likely fail.
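As a sketch, streaming an input whose size is known in advance (the file name, remote path, and the size of exactly 1 GiB, i.e. 1073741824 bytes, are illustrative):

cat 1GiB.bin | rclone rcat --size 1073741824 remote:path/to/1GiB.bin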

Note that the upload cannot be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching it locally and then using rclone move to send it to the destination.

rclone rcat remote:path [flags]


-h, --help help for rcat
--size int File size hint to preallocate (default -1)

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Run rclone listening to remote control commands only.

This runs rclone so that it only listens to remote control commands.

This is useful if you are controlling rclone via the rc API.

If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.

See the rc documentation (https://rclone.org/rc/) for more info on the rc flags.

rclone rcd <path to files to serve>* [flags]


-h, --help help for rcd

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Remove empty directories under the path.

This recursively removes any empty directories (including directories that only contain empty directories), that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the --leave-root flag.

Use command rmdir (https://rclone.org/commands/rclone_rmdir/) to delete just the empty directory given by path, not recurse.

This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete (https://rclone.org/commands/rclone_delete/) command will delete files but leave the directory structure (unless used with option --rmdirs).

To delete a path and any objects in it, use purge (https://rclone.org/commands/rclone_purge/) command.

rclone rmdirs remote:path [flags]


-h, --help help for rmdirs
--leave-root Do not remove root directory if empty

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Update the rclone binary.

This command downloads the latest release of rclone and replaces the currently running binary. The download is verified with a hashsum and cryptographically signed signature.

If used without flags (or with implied --stable flag), this command will install the latest stable release. However, some issues may be fixed (or features added) only in the latest beta release. In such cases you should run the command with the --beta flag, i.e. rclone selfupdate --beta. You can check in advance what version would be installed by adding the --check flag, then repeat the command without it when you are satisfied.

Sometimes the rclone team may recommend a specific beta or stable rclone release to troubleshoot your issue or to try a bleeding-edge feature. The --version VER flag, if given, will update to that version instead of the latest one. If you omit the micro version from VER (for example 1.53), the latest matching micro version will be used.
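For example, using flags documented below, you might first check what would be installed, then install the latest beta, or pin a particular release:

rclone selfupdate --check
rclone selfupdate --beta
rclone selfupdate --version 1.53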

Upon successful update rclone will print a message containing the previous version number. You will need it if you later decide to revert your update for some reason: note the previous version and run rclone selfupdate [--beta] OLDVER. If the old version contains only dots and digits (for example v1.54.0) then it is a stable release, so you won't need the --beta flag. Beta releases carry additional version information, similar to v1.54.0-beta.5111.06f1c0c61. (If you are a developer and use a locally built rclone, the version number will end with -DEV; you will have to rebuild it yourself as it obviously can't be distributed.)

If you previously installed rclone via a package manager, the package may include local documentation or configure services. You may wish to update with the flag --package deb or --package rpm (whichever is correct for your OS) to update these too. With the default --package zip this command will update only the rclone executable, so the local manual may become out of date afterwards.

The rclone mount command (https://rclone.org/commands/rclone_mount/) may or may not support extended FUSE options depending on the build and OS. selfupdate will refuse to update if the capability would be discarded.

Note: Windows forbids deletion of a currently running executable so this command will rename the old executable to 'rclone.old.exe' upon success.

Please note that this command was not available before rclone version 1.55. If it fails for you with the message unknown command "selfupdate" then you will need to update manually following the install instructions located at https://rclone.org/install/

rclone selfupdate [flags]


--beta Install beta release
--check Check for latest release, do not download
-h, --help help for selfupdate
--output string Save the downloaded binary at a given path (default: replace running binary)
--package string Package format: zip|deb|rpm (default: zip)
--stable Install stable release (this is the default)
--version string Install the given rclone version (default: latest)

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Serve a remote over a protocol.

Serve a remote over a given protocol. Requires the use of a subcommand to specify the protocol, e.g.

rclone serve http remote:

Each subcommand has its own options which you can see in their help.

rclone serve <protocol> [opts] <remote> [flags]


-h, --help help for serve

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.
rclone serve dlna (https://rclone.org/commands/rclone_serve_dlna/) - Serve remote:path over DLNA
rclone serve docker (https://rclone.org/commands/rclone_serve_docker/) - Serve any remote on docker's volume plugin API.
rclone serve ftp (https://rclone.org/commands/rclone_serve_ftp/) - Serve remote:path over FTP.
rclone serve http (https://rclone.org/commands/rclone_serve_http/) - Serve the remote over HTTP.
rclone serve restic (https://rclone.org/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
rclone serve sftp (https://rclone.org/commands/rclone_serve_sftp/) - Serve the remote over SFTP.
rclone serve webdav (https://rclone.org/commands/rclone_serve_webdav/) - Serve remote:path over WebDAV.

Serve remote:path over DLNA

Run a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.

Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.

Server options

Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs.

Use --name to choose the friendly server name, which is by default "rclone (hostname)".

Use --log-trace in conjunction with -vv to enable additional debug logging of all UPNP traffic.
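For example, a read-only server on the default port with a custom friendly name could be started along these lines (the remote name and server name are illustrative):

rclone serve dlna remote:media --addr :7879 --name rclone-media --read-only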

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for (default 5m0s)
--poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

Files can't be opened for both read AND write
Files opened for write can't be seeked
Existing files opened for write must have O_TRUNC set
Files open for read with O_TRUNC will be opened write only
Files open for write only will behave as if O_TRUNC was supplied
Open modes O_APPEND, O_TRUNC are ignored
If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, while using minimal disk space.

These operations are not possible

Files opened for write only can't be seeked
Existing files opened for write must have O_TRUNC set
Files opened for write only will ignore O_APPEND, O_TRUNC
If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

size
modification time
hash

where available on an object.

On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.

Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.

VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

These flags control the chunking:

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.

With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Only allow read-only access.

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

--transfers int  Number of file transfers to run in parallel (default 4)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different from what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

VFS Disk Options

This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

--vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
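
For example, to have rclone compute the used space itself (a hypothetical invocation, best combined with a VFS cache as noted above):

rclone serve dlna remote:path --vfs-cache-mode full --vfs-used-is-size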

rclone serve dlna remote:path [flags]


--addr string The ip:port or :port to bind the DLNA http server to (default ":7879")
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

Serve any remote on docker's volume plugin API.

This command implements the Docker volume plugin API, allowing docker to use rclone as a data storage mechanism for various cloud providers. rclone provides a docker volume plugin based on it.

To create a docker plugin, one must create a Unix or TCP socket that Docker will look for when you use the plugin. The plugin then listens for commands from the docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, for example:

sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv

Running rclone serve docker will create this socket and listen for commands from Docker to create the necessary volumes. Normally you need not give the --socket-addr flag. The API will listen on the unix domain socket at /run/docker/plugins/rclone.sock. In the example above rclone will create a TCP socket and a small file /etc/docker/plugins/rclone.spec containing the socket address. We use sudo because both paths are writeable only by the root user.

If you later decide to change the listening socket, the docker daemon must be restarted to reconnect to /run/docker/plugins/rclone.sock or parse the new /etc/docker/plugins/rclone.spec. Until you restart, any volume-related docker commands will time out trying to access the old socket. Running directly is supported on Linux only, not on Windows or macOS. This is not a problem with the managed plugin mode described in detail in the full documentation (https://rclone.org/docker).

The command will create volume mounts under the path given by --base-dir (by default /var/lib/docker-volumes/rclone available only to root) and maintain the JSON formatted file docker-plugin.state in the rclone cache directory with book-keeping records of created and mounted volumes.

All mount and VFS options are submitted by the docker daemon via API, but you can also provide defaults on the command line as well as set path to the config file and cache directory or adjust logging verbosity.
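
Once the plugin socket is up, docker can create and mount volumes through it. As a rough sketch (the volume name, remote and driver options here are illustrative assumptions, not taken from this manual - see the full documentation for the supported options):

docker volume create my_vol -d rclone -o remote=mys3:bucket -o vfs-cache-mode=writes
docker run --rm -it -v my_vol:/data alpine ls /data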

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for (default 5m0s)
--poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

Files can't be opened for both read AND write
Files opened for write can't be seeked
Existing files opened for write must have O_TRUNC set
Files open for read with O_TRUNC will be opened write only
Files open for write only will behave as if O_TRUNC was supplied
Open modes O_APPEND, O_TRUNC are ignored
If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

These operations are not possible

Files opened for write only can't be seeked
Existing files opened for write must have O_TRUNC set
Files opened for write only will ignore O_APPEND, O_TRUNC
If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
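
For example, to favour on-disk read ahead over memory buffering in full cache mode (a hypothetical set of defaults; the sizes are placeholders):

rclone serve docker --vfs-cache-mode full --buffer-size 16M --vfs-read-ahead 256M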

IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

size
modification time
hash

where available on an object.

On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

For example, hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.

Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.
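
For example, when caching an s3 remote through the plugin you might set this as a default (a hypothetical invocation):

rclone serve docker --vfs-cache-mode full --vfs-fast-fingerprint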

VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file, rclone reads only the chunk specified. This can reduce the used download quota for some remotes by requesting only the chunks that are actually read, at the cost of an increased number of requests.

These flags control the chunking:

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.

With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Only allow read-only access.

Sometimes reads or writes are delivered to rclone out of order. Rather than seeking, rclone will wait a short time for the in-sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

--transfers int  Number of file transfers to run in parallel (default 4)
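
For example, to upload more modified files in parallel from the write cache (a hypothetical invocation):

rclone serve docker --vfs-cache-mode writes --transfers 8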

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different from what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

VFS Disk Options

This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

--vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

rclone serve docker [flags]


--allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
--allow-other Allow access to other users (not supported on Windows)
--allow-root Allow access to root user (not supported on Windows)
--async-read Use asynchronous reads (not supported on Windows) (default true)
--attr-timeout duration Time for which file/directory attributes are cached (default 1s)
--base-dir string Base directory for volumes (default "/var/lib/docker-volumes/rclone")
--daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)
--daemon-timeout duration Time limit for rclone to respond to kernel (not supported on Windows)
--daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
--debug-fuse Debug the FUSE internals - needs -v
--default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
--devname string Set the device name - default is remote:path
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--forget-state Skip restoring previous state
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for docker
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--no-spec Do not write spec file
--noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true)
--noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
-o, --option stringArray Option for libfuse/WinFsp (repeat if required)
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
--socket-addr string Address <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
--socket-gid int GID for unix socket (default: current process GID) (default 1000)
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
--volname string Set the volume name (supported on Windows and OSX only)
--write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

Serve remote:path over FTP.

Run a basic FTP server to serve a remote over the FTP protocol. This can be viewed with an FTP client or you can make a remote of type FTP to read and write it.

Server options

Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

By default this will serve files without needing a login.

You can set a single username and password with the --user and --pass flags.
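
For example (a hypothetical invocation; the credentials are placeholders):

rclone serve ftp remote:path --addr :2121 --user alice --pass secretpassword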

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for (default 5m0s)
--poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.
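
For example, two servers sharing a remote could be kept apart with separate cache hierarchies (a hypothetical sketch; the ports and paths are placeholders):

rclone serve ftp remote:path --addr :2121 --vfs-cache-mode writes --cache-dir /var/cache/rclone-a
rclone serve ftp remote:path --addr :2122 --vfs-cache-mode writes --cache-dir /var/cache/rclone-b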

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

Files can't be opened for both read AND write
Files opened for write can't be seeked
Existing files opened for write must have O_TRUNC set
Files open for read with O_TRUNC will be opened write only
Files open for write only will behave as if O_TRUNC was supplied
Open modes O_APPEND, O_TRUNC are ignored
If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

These operations are not possible

Files opened for write only can't be seeked
Existing files opened for write must have O_TRUNC set
Files opened for write only will ignore O_APPEND, O_TRUNC
If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

size
modification time
hash

where available on an object.

On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

For example, hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.

Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.

VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file, rclone reads only the chunk specified. This can reduce the used download quota for some remotes by requesting only the chunks that are actually read, at the cost of an increased number of requests.

These flags control the chunking:

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.

With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Only allow read-only access.

Sometimes reads or writes are delivered to rclone out of order. Rather than seeking, rclone will wait a short time for the in-sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

--transfers int  Number of file transfers to run in parallel (default 4)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different from what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

VFS Disk Options

This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

--vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

Auth Proxy

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.

There is an example program bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py) in the rclone source code.

The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

The config generated must have this extra parameter:

_root - root to use for the backend

And it may have this parameter:

_obscure - comma separated strings for parameters to obscure

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
    "user": "me",
    "pass": "mypassword"
}

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
    "user": "me",
    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}

And as an example the proxy might return this on STDOUT:

{
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",
    "user": "me",
    "pass": "mypassword",
    "host": "sftp.example.com"
}

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied user in any way. For example, to proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.

Note that an internal cache is keyed on user, so only use that for configuration; don't use pass or public_key. This also means that if a user's password or public-key is changed, the cache will need to expire (which takes 5 minutes) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.
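
To enable the proxy, pass the program's path when starting the server (a hypothetical invocation; the script path is a placeholder):

rclone serve ftp remote:path --auth-proxy /etc/rclone/proxy.py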

rclone serve ftp remote:path [flags]


--addr string IPaddress:Port or :Port to bind server to (default "localhost:2121")
--auth-proxy string A program to use to create the backend from the auth
--cert string TLS PEM key (concatenation of certificate and CA certificate)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for ftp
--key string TLS PEM Private key
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--pass string Password for authentication (empty value allow every password)
--passive-port string Passive port range to use (default "30000-32000")
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--public-ip string Public IP address to advertise for passive connections
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication (default "anonymous")
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

Serve the remote over HTTP.

Run a basic web server to serve a remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.

You can use the filter flags (e.g. --include, --exclude) to control what is served.

The server will log errors. Use -v to see access logs.

--bwlimit will be respected for file transfers. Use --stats to control the stats printing.

Server options

Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
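
For example, to serve behind a reverse proxy under a /rclone/ prefix (a hypothetical invocation):

rclone serve http remote:path --addr :8080 --baseurl /rclone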

By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
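
For example, to serve over https (a hypothetical invocation; the file names are placeholders):

rclone serve http remote:path --addr :8443 --cert server.crt --key server.key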

--template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:

Parameter     Description
.Name         The full path of a file/directory.
.Title        Directory listing of .Name
.Sort         The current sort used. This is changeable via the ?sort= parameter
              Sort Options: namedirfirst,name,size,time (default namedirfirst)
.Order        The current ordering used. This is changeable via the ?order= parameter
              Order Options: asc,desc (default asc)
.Query        Currently unused.
.Breadcrumb   Allows for creating a relative navigation
-- .Link      The relative to the root link of the Text.
-- .Text      The Name of the directory.
.Entries      Information about a specific file/directory.
-- .URL       The 'url' of an entry.
-- .Leaf      Currently same as 'URL' but intended to be 'just' the name.
-- .IsDir     Boolean for if an entry is a directory or not.
-- .Size      Size in Bytes of the entry.
-- .ModTime   The UTC timestamp of an entry.

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

Use --salt to change the password hashing salt from the default.
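
For example, to require basic authentication against the htpasswd file created above (a hypothetical invocation; the realm value is a placeholder):

rclone serve http remote:path --addr :8080 --htpasswd ./htpasswd --realm rclone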

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for (default 5m0s)
--poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

Files can't be opened for both read AND write
Files opened for write can't be seeked
Existing files opened for write must have O_TRUNC set
Files open for read with O_TRUNC will be opened write only
Files open for write only will behave as if O_TRUNC was supplied
Open modes O_APPEND, O_TRUNC are ignored
If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.

These operations are not possible

Files opened for write only can't be seeked
Existing files opened for write must have O_TRUNC set
Files opened for write only will ignore O_APPEND, O_TRUNC
If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

size
modification time
hash

where available on an object.

On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

For example, hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.

Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.

VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

These flags control the chunking:

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.

With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.
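
As a concrete sketch (values are illustrative), the following starts with 64M chunks and caps the doubling at 1G:

rclone serve http remote:path --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G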

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Only allow read-only access.

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

--transfers int  Number of file transfers to run in parallel (default 4)
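
For example, to upload up to 8 modified files from the cache in parallel (illustrative value):

rclone serve http remote:path --vfs-cache-mode writes --transfers 8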

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

VFS Disk Options

This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

--vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
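
If you do need it, a sketch of its use together with caching might look like this:

rclone serve http remote:path --vfs-cache-mode full --vfs-used-is-size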

rclone serve http remote:path [flags]


--addr string IPaddress:Port or :Port to bind server to (default "127.0.0.1:8080")
--baseurl string Prefix for URLs - leave blank for root
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
--htpasswd string A htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
--realm string Realm for authentication
--salt string Password hashing salt (default "dlPL2MqE")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User-specified template
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

Serve the remote for restic's REST API.

Run a basic web server to serve a remote over restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.

Restic (https://restic.net/) is a command-line program for doing backups.

The server will log errors. Use -v to see access logs.

--bwlimit will be respected for file transfers. Use --stats to control the stats printing.

First set up a remote for your chosen cloud provider (https://rclone.org/docs/#configure).

Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.

Now start the rclone restic server

rclone serve restic -v remote:backup

Where you can replace "backup" in the above by whatever path in the remote you wish to use.

By default this will serve on "localhost:8080"; you can change this with the --addr flag.

You might wish to start this server on boot.

Adding --cache-objects=false will cause rclone to stop caching objects returned from the List call. Caching is normally desirable as it speeds up downloading objects, saves transactions and uses very little memory.

Now you can follow the restic instructions (http://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) on setting up restic.

Note that you will need restic 0.8.2 or later to interoperate with rclone.

For the example above you will want to use "http://localhost:8080/" as the URL for the REST server.

For example:

$ export RESTIC_REPOSITORY=rest:http://localhost:8080/
$ export RESTIC_PASSWORD=yourpassword
$ restic init
created restic backend 8b1a4b56ae at rest:http://localhost:8080/
Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.
$ restic backup /path/to/files/to/backup
scan [/path/to/files/to/backup]
scanned 189 directories, 312 files in 0:00
[0:00] 100.00%  38.128 MiB / 38.128 MiB  501 / 501 items  0 errors  ETA 0:00
duration: 0:00
snapshot 45c8fdd8 saved

Note that you can use the endpoint to host multiple repositories. Do this by adding a directory name or path after the URL. Note that these must end with /. Eg

$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
# backup user1 stuff
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
# backup user2 stuff

The --private-repos flag can be used to limit users to repositories starting with a path of /<username>/.
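
For example (the htpasswd path is illustrative), per-user authentication can be combined with private repositories like this:

rclone serve restic --addr :8080 --htpasswd ./htpasswd --private-repos remote:backup

Each authenticated user would then use a repository URL under their own /<username>/ path.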

Server options

Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:

Parameter Description
.Name The full path of a file/directory.
.Title Directory listing of .Name
.Sort The current sort used. This is changeable via ?sort= parameter
Sort Options: namedirfirst,name,size,time (default namedirfirst)
.Order The current ordering used. This is changeable via ?order= parameter
Order Options: asc,desc (default asc)
.Query Currently unused.
.Breadcrumb Allows for creating a relative navigation
-- .Link The relative to the root link of the Text.
-- .Text The Name of the directory.
.Entries Information about a specific file/directory.
-- .URL The 'url' of an entry.
-- .Leaf Currently same as 'URL' but intended to be 'just' the name.
-- .IsDir Boolean for if an entry is a directory or not.
-- .Size Size in Bytes of the entry.
-- .ModTime The UTC timestamp of an entry.

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

By default this will serve over HTTP. If you want to serve over HTTPS you will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
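
For example (certificate and key file names are illustrative):

rclone serve restic remote:backup --addr :8443 --cert server.crt --key server.key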

rclone serve restic remote:path [flags]


--addr string IPaddress:Port or :Port to bind server to (default "localhost:8080")
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
-h, --help help for restic
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--pass string Password for authentication
--private-repos Users can only access their private repo
--realm string Realm for authentication (default "rclone")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--stdio Run an HTTP2 server on stdin/stdout
--template string User-specified template
--user string User name for authentication

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

Serve the remote over SFTP.

Run an SFTP server to serve a remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.

You can use the filter flags (e.g. --include, --exclude) to control what is served.

The server will log errors. Use -v to see access logs.

--bwlimit will be respected for file transfers. Use --stats to control the stats printing.

You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in.

Note that this also implements a small number of shell commands so that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that it can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend.

If you don't supply a host --key then rclone will generate rsa, ecdsa and ed25519 variants, and cache them for later use in rclone's cache directory (see rclone help flags cache-dir) in the "serve-sftp" directory.

By default the server binds to localhost:2022 - if you want it to be reachable externally then supply --addr :2022 for example.

Note that the default of --vfs-cache-mode off is fine for the rclone sftp backend, but it may not be with other SFTP clients.
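
Putting these pieces together, a server reachable on all interfaces with password authentication and write caching could be started like this (the credentials are illustrative):

rclone serve sftp remote:path --addr :2022 --user me --pass mypassword --vfs-cache-mode writes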

If --stdio is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example:

restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...

On the client you need to set --transfers 1 when using --stdio. Otherwise multiple instances of the rclone server are started by OpenSSH which can lead to "corrupted on transfer" errors. This is the case because the client chooses indiscriminately which server to send commands to while the servers all have different views of the state of the filing system.

The "restrict" in authorized_keys prevents SHA1SUMs and MD5SUMs from beeing used. Omitting "restrict" and using --sftp-path-override to enable checksumming is possible but less secure and you could use the SFTP server provided by OpenSSH in this case.

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for (default 5m0s)
--poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
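
For example, to refresh directory listings more aggressively (values are illustrative; --poll-interval must stay smaller than --dir-cache-time):

rclone serve sftp remote:path --dir-cache-time 30s --poll-interval 10s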

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.
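
For example, two instances on the same machine could be given separate cache hierarchies like this (the remote names and paths are illustrative):

rclone serve sftp remote:path --vfs-cache-mode writes --cache-dir /var/cache/rclone-sftp
rclone serve sftp other:path --vfs-cache-mode writes --cache-dir /var/cache/rclone-other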

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

Files can't be opened for both read AND write
Files opened for write can't be seeked
Existing files opened for write must have O_TRUNC set
Files open for read with O_TRUNC will be opened write only
Files open for write only will behave as if O_TRUNC was supplied
Open modes O_APPEND, O_TRUNC are ignored
If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.

These operations are not possible

Files opened for write only can't be seeked
Existing files opened for write must have O_TRUNC set
Files opened for write only will ignore O_APPEND, O_TRUNC
If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

size
modification time
hash

where available on an object.

On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.

Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.

VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

These flags control the chunking:

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.

With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Only allow read-only access.

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

--transfers int  Number of file transfers to run in parallel (default 4)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

VFS Disk Options

This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

--vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

Auth Proxy

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.

There is an example program bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py) in the rclone source code.

The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

The config generated must have this extra parameter: _root - root to use for the backend.

It may also have this parameter: _obscure - comma separated strings for parameters to obscure.

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
    "user": "me",
    "pass": "mypassword"
}

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
    "user": "me",
    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}

And, as an example, the program might return this on STDOUT:

{
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",
    "user": "me",
    "pass": "mypassword",
    "host": "sftp.example.com"
}

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied user in any way. For example, to proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.

Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.

rclone serve sftp remote:path [flags]


--addr string IPaddress:Port or :Port to bind server to (default "localhost:2022")
--auth-proxy string A program to use to create the backend from the auth
--authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys")
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for sftp
--key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate)
--no-auth Allow connections with no authentication if set
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
--stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

Serve remote:path over WebDAV.

Run a basic WebDAV server to serve a remote over HTTP via the WebDAV protocol. This can be viewed with a WebDAV client, through a web browser, or you can make a remote of type WebDAV to read and write it.

The --etag-hash flag controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.

If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1". Use the hashsum (https://rclone.org/commands/rclone_hashsum/) command to see the full list.
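
For example, to base the ETag on the MD5 hash where the backend supports it:

rclone serve webdav remote:path --etag-hash MD5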

Server options

Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:

Parameter Description
.Name The full path of a file/directory.
.Title Directory listing of .Name
.Sort The current sort used. This is changeable via ?sort= parameter
Sort Options: namedirfirst,name,size,time (default namedirfirst)
.Order The current ordering used. This is changeable via ?order= parameter
Order Options: asc,desc (default asc)
.Query Currently unused.
.Breadcrumb Allows for creating a relative navigation
-- .Link The relative to the root link of the Text.
-- .Text The Name of the directory.
.Entries Information about a specific file/directory.
-- .URL The 'url' of an entry.
-- .Leaf Currently same as 'URL' but intended to be 'just' the name.
-- .IsDir Boolean for if an entry is a directory or not.
-- .Size Size in Bytes of the entry.
-- .ModTime The UTC timestamp of an entry.

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.

Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

To create an htpasswd file:

touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser

The password file can be updated while rclone is running.

Use --realm to set the authentication realm.

By default this will serve over HTTP. If you want to serve over HTTPS you will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

VFS Directory Cache

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

--dir-cache-time duration   Time to cache directory entries for (default 5m0s)
--poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

kill -SIGHUP $(pidof rclone)

If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

rclone rc vfs/forget

Or individual files or directories:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared.

This flag is an upper limit for the memory used per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used.

The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

VFS File Caching

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility.

For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

--cache-dir string                   Directory rclone will use for caching.
--vfs-cache-mode CacheMode           Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration   Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration            Time to writeback files after last use when using cache (default 5s)

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.

--vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

This will mean some operations are not possible

Files can't be opened for both read AND write
Files opened for write can't be seeked
Existing files opened for write must have O_TRUNC set
Files open for read with O_TRUNC will be opened write only
Files open for write only will behave as if O_TRUNC was supplied
Open modes O_APPEND, O_TRUNC are ignored
If an upload fails it can't be retried

--vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.

These operations are not possible

Files opened for write only can't be seeked
Existing files opened for write must have O_TRUNC set
Files opened for write only will ignore O_APPEND, O_TRUNC
If an upload fails it can't be retried

--vfs-cache-mode writes

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.

In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.

When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:

size
modification time
hash

where available on an object.

On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).

For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.

If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.

Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.

VFS Chunked Reading

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

These flags control the chunking:

--vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
--vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)

Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.

With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.

VFS Performance

These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.

In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

--no-checksum     Don't compare checksums on up/download.
--no-modtime      Don't read/write the modification time (can speed things up).
--no-seek         Don't allow seeking in files.
--read-only       Only allow read-only access.

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

--vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

--transfers int  Number of file transfers to run in parallel (default 4)

VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.

File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.

If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

VFS Disk Options

This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

--vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
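
For example (the size shown is purely illustrative):

rclone serve webdav remote:path --vfs-disk-space-total-size 256G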

Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

Auth Proxy

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.

There is an example program bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py) in the rclone source code.

The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

The config generated must have this extra parameter: _root - root to use for the backend.

It may also have this parameter: _obscure - comma separated strings for parameters to obscure.

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
    "user": "me",
    "pass": "mypassword"
}

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
    "user": "me",
    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}

And, as an example, the program might return this on STDOUT:

{
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",
    "user": "me",
    "pass": "mypassword",
    "host": "sftp.example.com"
}

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied user in any way. For example, to proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.

Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.

rclone serve webdav remote:path [flags]


--addr string IPaddress:Port or :Port to bind server to (default "localhost:8080")
--auth-proxy string A program to use to create the backend from the auth
--baseurl string Prefix for URLs - leave blank for root
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--disable-dir-list Disable HTML directory list on GET request for a directory
--etag-hash string Which hash to use for the ETag, or auto or blank for off
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
--realm string Realm for authentication (default "rclone")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User-specified template
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match
--vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
--vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
--vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

Changes storage class/tier of objects in remote.

rclone settier changes storage tier or class at remote if supported. Some cloud storage services provide different storage classes on objects, for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool and Archive, Google Cloud Storage - Regional Storage, Nearline, Coldline etc.

Note that certain tier changes make objects not available to access immediately. For example, tiering to archive in Azure Blob storage puts objects in a frozen state; the user can restore them by setting the tier to Hot/Cool. Similarly, tiering S3 objects to Glacier makes them inaccessible.

You can use it to tier a single object

rclone settier Cool remote:path/file

Or use rclone filters to set tier on only specific files

rclone --include "*.txt" settier Hot remote:path/dir

Or just provide a remote directory and all files in the directory will be tiered

rclone settier tier remote:path/dir
rclone settier tier remote:path [flags]


-h, --help help for settier

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

Run a test command

Rclone test is used to run test commands.

Select which test command you want with the subcommand, eg

rclone test memory remote:

Each subcommand has its own options which you can see in their help.

NB Be careful running these commands, they may do strange things so reading their documentation first is recommended.


-h, --help help for test

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.
rclone test changenotify (https://rclone.org/commands/rclone_test_changenotify/) - Log any change notify requests for the remote passed in.
rclone test histogram (https://rclone.org/commands/rclone_test_histogram/) - Makes a histogram of file name characters.
rclone test info (https://rclone.org/commands/rclone_test_info/) - Discovers file name or other limitations for paths.
rclone test makefile (https://rclone.org/commands/rclone_test_makefile/) - Make files with random contents of the size given
rclone test makefiles (https://rclone.org/commands/rclone_test_makefiles/) - Make a random file hierarchy in a directory
rclone test memory (https://rclone.org/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats.

Log any change notify requests for the remote passed in.

rclone test changenotify remote: [flags]


-h, --help help for changenotify
--poll-interval duration Time to wait between polling for changes (default 10s)

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone test (https://rclone.org/commands/rclone_test/) - Run a test command

Makes a histogram of file name characters.

This command outputs JSON which shows the histogram of characters used in filenames in the remote:path specified.

The data doesn't contain any identifying information but is useful for the rclone developers when developing filename compression.

rclone test histogram [remote:path] [flags]


-h, --help help for histogram

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone test (https://rclone.org/commands/rclone_test/) - Run a test command

Discovers file name or other limitations for paths.

rclone test info discovers what filenames and upload methods are possible to write to the paths passed in and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of go code for each one.

NB this can create undeletable files and other hazards - use with care

rclone test info [remote:path]+ [flags]


--all Run all tests
--check-control Check control characters
--check-length Check max filename length
--check-normalization Check UTF-8 Normalization
--check-streaming Check uploads with indeterminate file size
-h, --help help for info
--upload-wait duration Wait after writing a file
--write-json string Write results to file

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone test (https://rclone.org/commands/rclone_test/) - Run a test command

Make files with random contents of the size given

rclone test makefile <size> [<file>]+ [flags]


--ascii Fill files with random ASCII printable bytes only
--chargen Fill files with a ASCII chargen pattern
-h, --help help for makefile
--pattern Fill files with a periodic pattern
--seed int Seed for the random number generator (0 for random) (default 1)
--sparse Make the files sparse (appear to be filled with ASCII 0x00)
--zero Fill files with ASCII 0x00

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone test (https://rclone.org/commands/rclone_test/) - Run a test command

Make a random file hierarchy in a directory

rclone test makefiles <dir> [flags]


--ascii Fill files with random ASCII printable bytes only
--chargen Fill files with a ASCII chargen pattern
--files int Number of files to create (default 1000)
--files-per-directory int Average number of files per directory (default 10)
-h, --help help for makefiles
--max-file-size SizeSuffix Maximum size of files to create (default 100)
--max-name-length int Maximum size of file names (default 12)
--min-file-size SizeSuffix Minimum size of file to create
--min-name-length int Minimum size of file names (default 4)
--pattern Fill files with a periodic pattern
--seed int Seed for the random number generator (0 for random) (default 1)
--sparse Make the files sparse (appear to be filled with ASCII 0x00)
--zero Fill files with ASCII 0x00

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone test (https://rclone.org/commands/rclone_test/) - Run a test command

Load all the objects at remote:path into memory and report memory stats.

rclone test memory remote:path [flags]


-h, --help help for memory

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone test (https://rclone.org/commands/rclone_test/) - Run a test command

Create new file or change file modification time.

Set the modification time on file(s) as specified by remote:path to have the current time.

If remote:path does not exist then a zero sized file will be created, unless --no-create or --recursive is provided.

If --recursive is used then it recursively sets the modification time on all existing files found under the path. Filters are supported, and you can test with the --dry-run or the --interactive flag.

If --timestamp is used then it sets the modification time to that time instead of the current time. Times may be specified as one of:

'YYMMDD' - e.g. 17.10.30
'YYYY-MM-DDTHH:MM:SS' - e.g. 2006-01-02T15:04:05
'YYYY-MM-DDTHH:MM:SS.SSS' - e.g. 2006-01-02T15:04:05.123456789

Note that value of --timestamp is in UTC. If you want local time then add the --localtime flag.
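
For example, to set an explicit local time on a single file (the remote and path are illustrative):

rclone touch --timestamp 2020-06-01T13:40:00 --localtime remote:path/file.txt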

rclone touch remote:path [flags]


-h, --help help for touch
--localtime Use localtime for timestamp, not UTC
-C, --no-create Do not create the file if it does not exist (implied with --recursive)
-R, --recursive Recursively touch all files
-t, --timestamp string Use specified time instead of the current time of day

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

List the contents of the remote in a tree like fashion.

rclone tree lists the contents of a remote in a similar way to the unix tree command.

For example

$ rclone tree remote:path
/
├── file1
├── file2
├── file3
└── subdir
    ├── file4
    └── file5

1 directories, 5 files

You can use any of the filtering options with the tree command (e.g. --include and --exclude). You can also use --fast-list.

The tree command has many options for controlling the listing which are compatible with the unix tree command, for example you can include file sizes with --size. Note that not all of them have short options as they conflict with rclone's short options.

For a more interactive navigation of the remote see the ncdu (https://rclone.org/commands/rclone_ncdu/) command.

rclone tree remote:path [flags]


-a, --all All files are listed (list . files too)
-C, --color Turn colorization on always
-d, --dirs-only List directories only
--dirsfirst List directories before files (-U disables)
--full-path Print the full path prefix for each file
-h, --help help for tree
--level int Descend only level directories deep
-D, --modtime Print the date of last modification.
--noindent Don't print indentation lines
--noreport Turn off file/directory count at end of tree listing
-o, --output string Output to file instead of stdout
-p, --protections Print the protections for each file.
-Q, --quote Quote filenames with double quotes.
-s, --size Print the size in bytes of each file.
--sort string Select sort: name,version,size,mtime,ctime
--sort-ctime Sort files by last status change time
-t, --sort-modtime Sort files by last modification time
-r, --sort-reverse Reverse the order of the sort
-U, --unsorted Leave files unsorted
--version Sort files alphanumerically by version

See the global flags page (https://rclone.org/flags/) for global options not listed here.

SEE ALSO

rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends.

rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory if it isn't.

For example, suppose you have a remote with a file in called test.jpg, then you could copy just that file like this

rclone copy remote:test.jpg /tmp/download

The file test.jpg will be placed inside /tmp/download.

This is equivalent to specifying

rclone copy --files-from /tmp/files remote: /tmp/download

Where /tmp/files contains the single line

test.jpg

It is recommended to use copy when copying individual files, not sync. They have pretty much the same effect but copy will use a lot less memory.

The syntax of the paths passed to the rclone command is as follows.

This refers to the local file system.

On Windows \ may be used instead of / in local paths only, non local paths must use /. See local filesystem (https://rclone.org/local/#paths-on-windows) documentation for more about Windows-specific paths.

These paths needn't start with a leading / - if they don't then they will be relative to the current directory.

This refers to a directory path/to/dir on remote: as defined in the config file (configured with rclone config).

On most backends this refers to the same directory as remote:path/to/dir and that format should be preferred. On a very small number of remotes (FTP, SFTP, Dropbox for business) this will refer to a different directory. On these, paths without a leading / will refer to your "home" directory and paths with a leading / will refer to the root.

This is an advanced form for creating remotes on the fly. backend should be the name or prefix of a backend (the type in the config file) and all the configuration for the backend should be provided on the command line (or in environment variables).

Here are some examples:

rclone lsd --http-url https://pub.rclone.org :http:

To list all the directories in the root of https://pub.rclone.org/.

rclone lsf --http-url https://example.com :http:path/to/dir

To list files and directories in https://example.com/path/to/dir/

rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir

To copy files and directories in https://example.com/path/to/dir to /tmp/dir.

rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir

To copy files and directories from example.com in the relative directory path/to/dir to /tmp/dir using sftp.

The above examples can also be written using a connection string syntax, so instead of providing the arguments as command line parameters --http-url https://pub.rclone.org they are provided as part of the remote specification as a kind of connection string.

rclone lsd ":http,url='https://pub.rclone.org':"
rclone lsf ":http,url='https://example.com':path/to/dir"
rclone copy ":http,url='https://example.com':path/to/dir" /tmp/dir
rclone copy :sftp,host=example.com:path/to/dir /tmp/dir

Connection strings can be used to modify existing remotes as well as to create new remotes with the on the fly syntax. This example is equivalent to adding the --drive-shared-with-me parameter to the remote gdrive:.

rclone lsf "gdrive,shared_with_me:path/to/dir"

The major advantage of using the connection string style syntax is that it only applies to the remote, not to all the remotes of that type on the command line. A common confusion is this attempt to copy a file shared on google drive to the normal drive, which does not work because the --drive-shared-with-me flag applies to both the source and the destination.

rclone copy --drive-shared-with-me gdrive:shared-file.txt gdrive:

However using the connection string syntax, this does work.

rclone copy "gdrive,shared_with_me:shared-file.txt" gdrive:

Note that the connection string only affects the options of the immediate backend. If for example gdriveCrypt is a crypt based on gdrive, then the following command will not work as intended, because shared_with_me is ignored by the crypt backend:

rclone copy "gdriveCrypt,shared_with_me:shared-file.txt" gdriveCrypt:

The connection strings have the following syntax

remote,parameter=value,parameter2=value2:path/to/dir
:backend,parameter=value,parameter2=value2:path/to/dir

If the parameter has a : or , then it must be placed in quotes " or ', so

remote,parameter="colon:value",parameter2="comma,value":path/to/dir
:backend,parameter='colon:value',parameter2='comma,value':path/to/dir

If a quoted value needs to include that quote, then it should be doubled, so

remote,parameter="with""quote",parameter2='with''quote':path/to/dir

This will make parameter be with"quote and parameter2 be with'quote.

If you leave off the =parameter then rclone will substitute =true which works very well with flags. For example, to use s3 configured in the environment you could use:

rclone lsd :s3,env_auth:

Which is equivalent to

rclone lsd :s3,env_auth=true:

Note that on the command line you might need to surround these connection strings with " or ' to stop the shell interpreting any special characters within them.

If you are a shell master then you'll know which strings are OK and which aren't, but if you aren't sure then enclose them in " and use ' as the inside quote. This syntax works on all OSes.

rclone copy ":http,url='https://example.com':path/to/dir" /tmp/dir

On Linux/macOS some characters are still interpreted inside " strings in the shell (notably \ and $ and ") so if your strings contain those you can swap the roles of " and ' thus. (This syntax does not work on Windows.)

rclone copy ':http,url="https://example.com":path/to/dir' /tmp/dir

If you supply extra configuration to a backend by command line flag, environment variable or connection string then rclone will add a suffix based on the hash of the config to the name of the remote, eg

rclone -vv lsf --s3-chunk-size 20M s3:

Has the log message

DEBUG : s3: detected overridden config - adding "{Srj1p}" suffix to name

This is so rclone can tell the modified remote apart from the unmodified remote when caching the backends.

This should only be noticeable in the logs.

This means that on the fly backends such as

rclone -vv lsf :s3,env_auth:

Will get their own names

DEBUG : :s3: detected overridden config - adding "{YTu53}" suffix to name

Remote names are case sensitive, and must adhere to the following rules:

May only contain 0-9, A-Z, a-z, _, -, . and space.
May not start with - or space.

When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.

Here are some gotchas which may help users unfamiliar with the shell rules

If your names have spaces or shell metacharacters (e.g. *, ?, $, ', ", etc.) then you must quote them. Use single quotes ' by default.

rclone copy 'Important files?' remote:backup

If you want to send a ' you will need to use ", e.g.

rclone copy "O'Reilly Reviews" remote:backup

The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.

If your names have spaces in them, you need to put them in ", e.g.

rclone copy "E:\folder name\folder name\folder name" remote:backup

If you are using the root directory on its own then don't quote it (see #464 (https://github.com/rclone/rclone/issues/464) for why), e.g.

rclone copy E:\ remote:backup

rclone uses : to mark a remote name. This is, however, a valid filename component in non-Windows OSes. The remote name parser will only search for a : up to the first / so if you need to act on a file or directory like this then use the full path starting with a /, or use ./ as a current directory prefix.

So to sync a directory called sync:me to a remote called remote: use

rclone sync -i ./sync:me remote:path

or

rclone sync -i /full/path/to/sync:me remote:path

Most remotes (but not all - see the overview (https://rclone.org/overview/#optional-features)) support server-side copy.

This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.

Eg

rclone copy s3:oldbucket s3:newbucket

Will copy the contents of oldbucket to newbucket without downloading and re-uploading.

Remotes which don't support server-side copy will download and re-upload in this case.

Server side copies are used with sync and copy and will be identified in the log when using the -v flag. The move command may also use them if the remote doesn't support server-side move directly. This is done by issuing a server-side copy then a delete which is much quicker than a download and re-upload.

Server side copies will only be attempted if the remote names are the same.

This can be used when scripting to make aged backups efficiently, e.g.

rclone sync -i remote:current-backup remote:previous-backup
rclone sync -i /path/to/files remote:current-backup

Metadata is data about a file which isn't the contents of the file. Normally rclone only preserves the modification time and the content (MIME) type where possible.

Rclone supports preserving all the available metadata on files (not directories) when using the --metadata or -M flag.

Exactly what metadata is supported and what that support means depends on the backend. Backends that support metadata have a metadata section in their docs and are listed in the features table (https://rclone.org/overview/#features) (Eg local (https://rclone.org/local/#metadata), s3)

Rclone only supports a one-time sync of metadata. This means that metadata will be synced from the source object to the destination object only when the source object has changed and needs to be re-uploaded. If the metadata subsequently changes on the source object without changing the object itself then it won't be synced to the destination object. This is in line with the way rclone syncs Content-Type without the --metadata flag.

Using --metadata when syncing from local to local will preserve file attributes such as file mode, owner, extended attributes (not Windows).

Note that arbitrary metadata may be added to objects using the --metadata-set key=value flag when the object is first uploaded. This flag can be repeated as many times as necessary.

Metadata is divided into two types: system metadata and user metadata.

Metadata which the backend uses itself is called system metadata. For example on the local backend the system metadata uid will store the user ID of the file when used on a unix based platform.

Arbitrary metadata is called user metadata and this can be set however is desired.

When objects are copied from backend to backend, the receiving backend will attempt to interpret system metadata if it is supplied. Metadata may change from being user metadata to system metadata as objects are copied between different backends. For example copying an object from s3 sets the content-type metadata. In a backend which understands this (like azureblob) this will become the Content-Type of the object. In a backend which doesn't understand this (like the local backend) this will become user metadata. However should the local object be copied back to s3, the Content-Type will be set correctly.

Rclone implements a metadata framework which can read metadata from an object and write it to the object when (and only when) it is being uploaded.

This metadata is stored as a dictionary with string keys and string values.

There are some limits on the names of the keys (these may be clarified further in the future).

must be lower case
may be a-z 0-9 containing . - or _
length is backend dependent

Each backend can provide system metadata that it understands. Some backends can also store arbitrary user metadata.

Where possible the key names are standardized, so, for example, it is possible to copy object metadata from s3 to azureblob and the metadata will be translated appropriately.

Some backends have limits on the size of the metadata and rclone will give errors on upload if they are exceeded.

The goal of the implementation is to:

1. Preserve metadata if at all possible
2. Interpret metadata if at all possible

The consequence of 1 is that you can copy an S3 object to a local disk then back to S3 losslessly. Likewise you can copy a local file with file attributes and xattrs from local disk to s3 and back again losslessly.

The consequence of 2 is that you can copy an S3 object with metadata to Azureblob (say) and have the metadata appear on the Azureblob object also.

Here is a table of standard system metadata which, if appropriate, a backend may implement.

key                  description                                 example
mode                 File type and mode: octal, unix style       0100664
uid                  User ID of owner: decimal number            500
gid                  Group ID of owner: decimal number           500
rdev                 Device ID (if special file) => hexadecimal  0
atime                Time of last access: RFC 3339               2006-01-02T15:04:05.999999999Z07:00
mtime                Time of last modification: RFC 3339         2006-01-02T15:04:05.999999999Z07:00
btime                Time of file creation (birth): RFC 3339     2006-01-02T15:04:05.999999999Z07:00
cache-control        Cache-Control header                        no-cache
content-disposition  Content-Disposition header                  inline
content-encoding     Content-Encoding header                     gzip
content-language     Content-Language header                     en-US
content-type         Content-Type header                         text/plain

The metadata keys mtime and content-type will take precedence if supplied in the metadata over reading the Content-Type or modification time of the source object.

Hashes are not included in system metadata as there is a well defined way of reading those already.

Rclone has a number of options to control its behaviour.

Options that take parameters can have the values passed in two ways, --option=value or --option value. However boolean (true/false) options behave slightly differently to the other options in that --boolean sets the option to true and the absence of the flag sets it to false. It is also possible to specify --boolean=false or --boolean=true. Note that --boolean false is not valid - this is parsed as --boolean and the false is parsed as an extra command line argument for rclone.
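
As an illustration, the following two commands are equivalent, whereas rclone copy --dry-run false src: dst: would treat false as an extra path argument (the remote names are examples):

rclone copy --dry-run src: dst:
rclone copy --dry-run=true src: dst: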

Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

Options which use SIZE use KiB (multiples of 1024 bytes) by default. However, a suffix of B for Byte, K for KiB, M for MiB, G for GiB, T for TiB and P for PiB may be used. These are the binary units, e.g. 1, 2**10, 2**20, 2**30, 2**40 and 2**50 respectively.
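
As an illustration, both of the following request the same 16 MiB buffer - a bare number is read as KiB:

--buffer-size 16M
--buffer-size 16384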

When using sync, copy or move any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.

If --suffix is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.

The remote in use must support server-side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory without it being excluded by a filter rule.

For example

rclone sync -i /path/to/local remote:current --backup-dir remote:old

will sync /path/to/local to remote:current, but any files which would have been updated or deleted will be stored in remote:old.

If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today's date.

See --compare-dest and --copy-dest.

Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn't resolve or resolves to more than one IP address it will give an error.

This option controls the bandwidth limit. For example

--bwlimit 10M

would mean limit the upload and download bandwidth to 10 MiB/s. NB this is bytes per second not bits per second. To use a single limit, specify the desired bandwidth in KiB/s, or use a suffix B|K|M|G|T|P. The default is 0 which means to not limit bandwidth.

The upload and download bandwidth can be specified separately, as --bwlimit UP:DOWN, so

--bwlimit 10M:100k

would mean limit the upload bandwidth to 10 MiB/s and the download bandwidth to 100 KiB/s. Either limit can be "off" meaning no limit, so to just limit the upload bandwidth you would use

--bwlimit 10M:off

this would limit the upload bandwidth to 10 MiB/s but the download bandwidth would be unlimited.

When specified as above the bandwidth limits last for the duration of run of the rclone binary.

It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH... where:

BANDWIDTH can be a single number, e.g. 100k, or a pair of numbers for upload:download, e.g. 10M:1M.
WEEKDAY can be written as the whole word or only using the first 3 characters. It is optional.
HH:MM is an hour from 00:00 to 23:59.

An example of a typical timetable to avoid link saturation during daytime working hours could be:

--bwlimit "08:00,512k 12:00,10M 13:00,512k 18:00,30M 23:00,off"

In this example, the transfer bandwidth will be set to 512 KiB/s at 8am every day. At noon, it will rise to 10 MiB/s, and drop back to 512 KiB/sec at 1pm. At 6pm, the bandwidth limit will be set to 30 MiB/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.

An example of timetable with WEEKDAY could be:

--bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"

It means that the transfer bandwidth will be set to 512 KiB/s on Monday. It will rise to 10 MiB/s before the end of Friday. At 10:00 on Saturday it will be set to 1 MiB/s. From 20:00 on Sunday it will be unlimited.

Timeslots without WEEKDAY are extended to the whole week. So this example:

--bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"

Is equivalent to this:

--bwlimit "Mon-00:00,512Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"

Bandwidth limits apply to the data transfer for all backends. For most backends the directory listing bandwidth is also included (exceptions being the non HTTP backends, ftp, sftp and storj).

Note that the units are Byte/s, not bit/s. Typically connections are measured in bit/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625 MiB/s so you would use a --bwlimit 0.625M parameter for rclone.

On Unix systems (Linux, macOS, ...) the bandwidth limiter can be toggled by sending a SIGUSR2 signal to rclone. This allows you to remove the limit from a long running rclone transfer and to restore it back to the value specified with --bwlimit quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:

kill -SIGUSR2 $(pidof rclone)

If you configure rclone with a remote control then you can use it to change the bwlimit dynamically:

rclone rc core/bwlimit rate=1M

This option controls the per file bandwidth limit. For the options see the --bwlimit flag.

For example use this to allow no transfers to be faster than 1 MiB/s

--bwlimit-file 1M

This can be used in conjunction with --bwlimit.

Note that if a schedule is provided the file will use the schedule in effect at the start of the transfer.

Use this sized buffer to speed up file transfers. Each --transfer will use this much memory for buffering.

When using mount or cmount each open file descriptor will use this much memory for buffering. See the mount (https://rclone.org/commands/rclone_mount/#file-buffering) documentation for more details.

Set to 0 to disable the buffering for the minimum memory usage.

Note that the memory allocation of the buffers is influenced by the --use-mmap flag.

Specify the directory rclone will use for caching, to override the default.

The default value depends on the operating system:

Windows %LocalAppData%\rclone, if LocalAppData is defined.
macOS $HOME/Library/Caches/rclone if HOME is defined.
Unix $XDG_CACHE_HOME/rclone if XDG_CACHE_HOME is defined, else $HOME/.cache/rclone if HOME is defined.
Fallback (on all OS) to $TMPDIR/rclone, where TMPDIR is the value from --temp-dir.

You can use the config paths (https://rclone.org/commands/rclone_config_paths/) command to see the current value.

Cache directory is heavily used by the VFS File Caching (https://rclone.org/commands/rclone_mount/#vfs-file-caching) mount feature, but also by serve (https://rclone.org/commands/rclone_serve/), GUI and other parts of rclone.

If this flag is set then in a sync, copy or move, rclone will do all the checks to see whether files need to be transferred before doing any of the transfers. Normally rclone would start running transfers as soon as possible.

This flag can be useful on IO limited systems where transfers interfere with checking.

It can also be useful to ensure perfect ordering when using --order-by.

Using this flag can use more memory as it effectively sets --max-backlog to infinite. This means that all the info on the objects to transfer is held in memory before the transfers start.

Originally controlling just the number of file checkers to run in parallel, e.g. by rclone copy. Now a fairly universal parallelism control used by rclone in several places.

Note: checkers do the equality checking of files during a sync. For some storage systems (e.g. S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel.

The default is to run 8 checkers in parallel. However, in case of slow-reacting backends you may need to lower (rather than increase) this default by setting --checkers to 4 or less threads. This is especially advised if you are experiencing backend server crashes during file checking phase (e.g. on subsequent or top-up backups where little or no file copying is done and checking takes up most of the time). Increase this setting only with utmost care, while monitoring your server health and file checking throughput.

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.

This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.

This is very useful when transferring between remotes which store the same hash type on the object, e.g. Drive and Swift. For details of which remotes support which hash type see the table in the overview section (https://rclone.org/overview/).

Eg rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.

When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.

When using sync, copy or move DIR is checked in addition to the destination for files. If a file identical to the source is found that file is NOT copied from source. This is useful to copy just files that have changed since the last backup.

You must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.
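
For example (the paths are illustrative), to copy only files that are not already present and identical in the previous backup:

rclone copy /path/to/local remote:current --compare-dest remote:previous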

See --copy-dest and --backup-dir.

Specify the location of the rclone configuration file, to override the default. E.g. rclone config --config="rclone.conf".

The exact default is a bit complex to describe, due to changes introduced through different versions of rclone while preserving backwards compatibility, but in most cases it is as simple as:

%APPDATA%/rclone/rclone.conf on Windows
~/.config/rclone/rclone.conf on other

The complete logic is as follows: Rclone will look for an existing configuration file in any of the following locations, in priority order:

1.
rclone.conf (in program directory, where rclone executable is)
2.
%APPDATA%/rclone/rclone.conf (only on Windows)
3.
$XDG_CONFIG_HOME/rclone/rclone.conf (on all systems, including Windows)
4.
~/.config/rclone/rclone.conf (see below for explanation of ~ symbol)
5.
~/.rclone.conf

If no existing configuration file is found, then a new one will be created in the following location:

On Windows: Location 2 listed above, except in the unlikely event that APPDATA is not defined, then location 4 is used instead.
On Unix: Location 3 if XDG_CONFIG_HOME is defined, else location 4.
Fallback to location 5 (on all OS) when the rclone directory cannot be created; if a home directory was not found either, then the path .rclone.conf relative to the current working directory will be used as a final resort.

The ~ symbol in the paths above represents the home directory of the current user on any OS, and the value is defined as follows:

On Windows: %HOME% if defined, else %USERPROFILE%, or else %HOMEDRIVE%\%HOMEPATH%.
On Unix: $HOME if defined, else by looking up current user in OS-specific user database (e.g. passwd file), or else use the result from shell command cd && pwd.

If you run rclone config file you will see where the default location is for you.

The fact that an existing file rclone.conf in the same directory as the rclone executable is always preferred means that it is easy to run in "portable" mode, by downloading the rclone executable to a writable directory and then creating an empty file rclone.conf in the same directory.

If the location is set to empty string "" or path to a file with name notfound, or the os null device represented by value NUL on Windows and /dev/null on Unix systems, then rclone will keep the config file in memory only.

The file format is basic INI (https://en.wikipedia.org/wiki/INI_file#Format): Sections of text, led by a [section] header and followed by key=value entries on separate lines. In rclone each remote is represented by its own section, where the section name defines the name of the remote. Options are specified as the key=value entries, where the key is the option name without the --backend- prefix, in lowercase and with _ instead of -. E.g. option --mega-hard-delete corresponds to key hard_delete. Only backend options can be specified. A special, and required, key type identifies the storage system (https://rclone.org/overview/), where the value is the internal lowercase name as returned by command rclone help backends. Comments are indicated by ; or # at the beginning of a line.

Example:

[megaremote]
type = mega
user = you@example.com
pass = PDPcQVVjVtzFY-GTdDFozqBhTdsPg3qH

Note that passwords are in obscured (https://rclone.org/commands/rclone_obscure/) form. Also, many storage systems use token-based authentication instead of passwords, and this requires additional steps. It is easier, and safer, to use the interactive command rclone config instead of manually editing the configuration file.

The configuration file will typically contain login information, and should therefore have restricted permissions so that only the current user can read it. Rclone tries to ensure this when it writes the file. You may also choose to encrypt the file.

When token-based authentication is used, the configuration file must be writable, because rclone needs to update the tokens inside it.

Set the connection timeout. This should be in go time format which looks like 5s for 5 seconds, 10m for 10 minutes, or 3h30m.

The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m by default.

When using sync, copy or move DIR is checked in addition to the destination for files. If a file identical to the source is found that file is server-side copied from DIR to the destination. This is useful for incremental backup.

The remote in use must support server-side copy and you must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.
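
An illustrative incremental backup invocation (the paths are examples) - files already identical in remote:previous are server-side copied from there instead of being uploaded again:

rclone copy /path/to/local remote:current --copy-dest remote:previous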

See --compare-dest and --backup-dir.

Mode to run dedupe command in. One of interactive, skip, first, newest, oldest, rename. The default is interactive.

See the dedupe command for more information as to what these options mean.
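
For example, to keep only the newest of each set of duplicates without being prompted (the remote name is illustrative):

rclone dedupe --dedupe-mode newest remote:dupes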

This disables a comma separated list of optional features. For example to disable server-side move and server-side copy use:

--disable move,copy

The features can be put in any case.

To see a list of which features can be disabled use:

--disable help

See the overview features (https://rclone.org/overview/#features) and optional features (https://rclone.org/overview/#optional-features) to get an idea of which feature does what.

This flag can be useful for debugging and in exceptional circumstances (e.g. Google Drive limiting the total volume of Server Side Copies to 100 GiB/day).

This stops rclone from trying to use HTTP/2 if available. This can sometimes speed up transfers due to a problem in the Go standard library (https://github.com/golang/go/issues/37373).

Specify a DSCP value or name to use in connections. This could help the QoS system to identify the traffic class. BE, EF, DF, LE, CSx and AFxx are allowed.

See the description of differentiated services (https://en.wikipedia.org/wiki/Differentiated_services) to get an idea of this field. Setting this to 1 (LE) to identify the flow to SCAVENGER class can avoid occupying too much bandwidth in a network with DiffServ support (RFC 8622 (https://tools.ietf.org/html/rfc8622)).

For example, if you have configured QoS on your router to handle LE properly, running:

rclone copy --dscp LE from:/from to:/to

would make the priority lower than usual internet flows.

This option has no effect on Windows (see golang/go#42728 (https://github.com/golang/go/issues/42728)).

Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the sync command which deletes files in the destination.

This specifies the amount of time to wait for a server's first response headers after fully writing the request headers if the request has an "Expect: 100-continue" header. Not all backends support using this.

Zero means no timeout and causes the body to be sent immediately, without waiting for the server to approve. This time does not include the time to send the request header.

The default is 1s. Set to 0 to disable.

By default, rclone will exit with return code 0 if there were no errors.

This option allows rclone to return exit code 9 if no files were transferred between the source and destination. This allows using rclone in scripts, and triggering follow-on actions if data was copied, or skipping if not.

NB: Enabling this option turns a usually non-fatal error into a potentially fatal one - please check and adjust your scripts accordingly!
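
A sketch of how this might be used in a shell script, using the exit code 9 described above (the flag and paths are illustrative):

rclone copy --error-on-no-transfer /new/photos remote:backup
if [ "$?" = "9" ]; then
    echo "no files were transferred"
fi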

When using rclone via the API rclone caches created remotes for 5 minutes by default in the "fs cache". This means that if you do repeated actions on the same remote then rclone won't have to build it again from scratch, which makes it more efficient.

This flag sets the time that the remotes are cached for. If you set it to 0 (or negative) then rclone won't cache the remotes at all.

Note that if you use some flags, e.g. --backup-dir, and if this is set to 0, rclone may build two remotes (one for the source or destination and one for the --backup-dir) where it may have only built one before.

This controls how often rclone checks for cached remotes to expire. See the --fs-cache-expire-duration documentation above for more info. The default is 60s, set to 0 to disable expiry.

Add an HTTP header for all transactions. The flag can be repeated to add multiple headers.

If you want to add headers only for uploads use --header-upload and if you want to add headers only for downloads use --header-download.

This flag is supported for all HTTP based backends even those not supported by --header-upload and --header-download so may be used as a workaround for those with care.

rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes"

Add an HTTP header for all download transactions. The flag can be repeated to add multiple headers.

rclone sync -i s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"

See the GitHub issue here (https://github.com/rclone/rclone/issues/59) for currently supported backends.

Add an HTTP header for all upload transactions. The flag can be repeated to add multiple headers.

rclone sync -i ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"

See the GitHub issue here (https://github.com/rclone/rclone/issues/59) for currently supported backends.

Rclone commands output values for sizes (e.g. number of bytes) and counts (e.g. number of files) either as raw numbers, or in human-readable format.

In human-readable format the values are scaled to larger units, indicated with a suffix shown after the value, and rounded to three decimals. Rclone consistently uses binary units (powers of 2) for sizes and decimal units (powers of 10) for counts. The unit prefix for size is according to IEC standard notation, e.g. Ki for kibi. Used with the byte unit, 1 KiB means 1024 Bytes. In list type of output, only the unit prefix is appended to the value (e.g. 9.762Ki), while in more textual output the full unit is shown (e.g. 9.762 KiB). For counts the SI standard notation is used, e.g. prefix k for kilo. Used with file counts, 1k means 1000 files.

The various list (https://rclone.org/commands/rclone_ls/) commands output raw numbers by default. Option --human-readable will make them output values in human-readable format instead (with the short unit prefix).

The about (https://rclone.org/commands/rclone_about/) command outputs human-readable by default, with a command-specific option --full to output the raw numbers instead.

Command size (https://rclone.org/commands/rclone_size/) outputs both human-readable and raw numbers in the same output.

The tree (https://rclone.org/commands/rclone_tree/) command also considers --human-readable, but it will not use the exact same notation as the other commands: It rounds to one decimal, and uses single letter suffix, e.g. K instead of Ki. The reason for this is that it relies on an external library.

The interactive command ncdu (https://rclone.org/commands/rclone_ncdu/) shows human-readable by default, and responds to key u for toggling human-readable format.

Using this option will cause rclone to ignore the case of the files when synchronizing so files will not be copied/synced when the existing filenames are the same, even if the casing is different.

Normally rclone will check that the checksums of transferred files match, and give an error "corrupted on transfer" if they don't.

You can use this option to skip that check. You should only use it if you have had the "corrupted on transfer" error message and you are sure you might want to transfer potentially corrupted data.

Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.

While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.

When performing a move/moveto command, this flag will leave skipped files in the source location unchanged when a file with the same name exists on the destination.

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If --checksum is set then it only checks the checksum.

It will also cause rclone to skip verifying the sizes are the same after transfer.

This can be useful for transferring files to and from OneDrive which occasionally misreports the size of image files (see #399 (https://github.com/rclone/rclone/issues/399) for more info).

Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.

Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum).

Treat source and destination files as immutable and disallow modification.

With this option set, files will be created and deleted as requested, but existing files will never be updated. If an existing file does not match between the source and destination, rclone will give the error Source and destination exist but do not match: immutable file modified.

Note that only commands which transfer files (e.g. sync, copy, move) are affected by this behavior, and only modification is disallowed. Files may still be deleted explicitly (e.g. delete, purge) or implicitly (e.g. sync, move). Use copy --immutable if it is desired to avoid deletion as well as modification.

This can be useful as an additional layer of protection for immutable or append-only data sets (notably backup archives), where modification implies corruption and should not be propagated.

This flag can be used to tell rclone that you wish a manual confirmation before destructive operations.

It is recommended that you use this flag while learning rclone especially with rclone sync.

For example

$ rclone delete -i /tmp/dir
rclone: delete "important-file.txt"?
y) Yes, this is OK (default)
n) No, skip this
s) Skip all delete operations with no more questions
!) Do all delete operations with no more questions
q) Exit rclone now.
y/n/s/!/q> n

The options mean

y: Yes, this operation should go ahead. You can also press Return for this to happen. You'll be asked every time unless you choose s or !.
n: No, do not do this operation. You'll be asked every time unless you choose s or !.
s: Skip all the following operations of this type with no more questions. This takes effect until rclone exits. If there are any different kind of operations you'll be prompted for them.
!: Do all the following operations with no more questions. Useful if you've decided that you don't mind rclone doing that kind of operation. This takes effect until rclone exits. If there are any different kind of operations you'll be prompted for them.
q: Quit rclone now, just in case!

During rmdirs it will not remove root directory, even if it's empty.

Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.

If FILE exists then rclone will append to it.

Note that if you are using the logrotate program to manage rclone's logs, then you should use the copytruncate option as rclone doesn't have a signal to rotate logs.

Comma separated list of log format options. Accepted options are date, time, microseconds, pid, longfile, shortfile, UTC. Any other keywords will be silently ignored. pid will tag log messages with the process identifier, which is useful with rclone mount --daemon. Other accepted options are explained in the go documentation (https://pkg.go.dev/log#pkg-constants). The default log format is "date,time".

This sets the log level for rclone. The default log level is NOTICE.

DEBUG is equivalent to -vv. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.

INFO is equivalent to -v. It outputs information about each transfer and prints stats once a minute by default.

NOTICE is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.

ERROR is equivalent to -q. It only outputs error messages.

This switches the log format to JSON for rclone. The fields of json log are level, msg, source, time.

This controls the number of low level retries rclone does.

A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v flag.

This shouldn't need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries flag) quicker.

Disable low level retries with --low-level-retries 1.

This is the maximum allowable backlog of files in a sync/copy/move queued for being checked or transferred.

This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use in the order of N KiB of memory when the backlog is in use.

Setting this large allows rclone to calculate how many files are pending more accurately, give a more accurate estimated finish time and make --order-by work more accurately.

Setting this small will make rclone more synchronous to the listings of the remote which may be desirable.

Setting this to a negative number will make the backlog as large as possible.

This tells rclone not to delete more than N files. If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress.
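
For example, to make a sync fail rather than delete more than 10 files on the destination (the paths are illustrative):

rclone sync --max-delete 10 /path/to/local remote:backup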

This modifies the recursion depth for all the commands except purge.

So if you do rclone --max-depth 1 ls remote:path you will see only the files in the top level directory. Using --max-depth 2 means you will see all the files in the first two directory levels and so on.

For historical reasons the lsd command defaults to using a --max-depth of 1 - you can override this with the command line flag.

You can use this flag to disable recursion (with --max-depth 1).

Note that if you use this with sync and --delete-excluded the files not recursed through are considered excluded and will be deleted on the destination. Test first with --dry-run if you are not sure what will happen.

Rclone will stop scheduling new transfers when it has run for the duration specified.

Defaults to off.

When the limit is reached any existing transfers will complete.

Rclone won't exit with an error if the transfer limit is reached.

Rclone will stop transferring when it has reached the size specified. Defaults to off.

When the limit is reached all transfers will stop immediately.

Rclone will exit with exit code 8 if the transfer limit is reached.

Setting this flag enables rclone to copy the metadata from the source to the destination. For local backends this is ownership, permissions, xattr etc. See the Metadata section for more info.

Add metadata key = value when uploading. This can be repeated as many times as required. See the Metadata section for more info.

This modifies the behavior of --max-transfer. Defaults to --cutoff-mode=hard.

Specifying --cutoff-mode=hard will stop transferring immediately when Rclone reaches the limit.

Specifying --cutoff-mode=soft will stop starting new transfers when Rclone reaches the limit.

Specifying --cutoff-mode=cautious will try to prevent Rclone from reaching the limit.
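
For example (an illustrative invocation), to stop scheduling new transfers once roughly 10 GiB has been transferred while letting in-flight transfers finish:

rclone copy --max-transfer 10G --cutoff-mode soft source:path dest:path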

When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.

The default is 1ns unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s by default.

This command line flag allows you to override that computed default.

When downloading files to the local backend above this size, rclone will use multiple threads to download the file (default 250M).

Rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE) on unix or NTSetInformationFile on Windows both of which take no time) then each thread writes directly into the file at the correct place. This means that rclone won't create fragmented or sparse files and there won't be any assembly time at the end of the transfer.

The number of threads used to download is controlled by --multi-thread-streams.

Use -vv if you wish to see info about the threads.

This will work with the sync/copy/move commands and friends copyto/moveto. Multi thread downloads will be used with rclone mount and rclone serve if --vfs-cache-mode is set to writes or above.

NB that this only works for a local destination but will work with any source.

NB that multi thread copies are disabled for local to local copies as they are faster without unless --multi-thread-streams is set explicitly.

NB on Windows using multi-thread downloads will cause the resulting files to be sparse (https://en.wikipedia.org/wiki/Sparse_file). Use --local-no-sparse to disable sparse files (which may cause long delays at the start of downloads) or disable multi-thread downloads with --multi-thread-streams 0

When using multi thread downloads (see above --multi-thread-cutoff) this sets the maximum number of streams to use. Set to 0 to disable multi thread downloads (Default 4).

Exactly how many streams rclone uses for the download depends on the size of the file. To calculate the number of download streams Rclone divides the size of the file by the --multi-thread-cutoff and rounds up, up to the maximum set with --multi-thread-streams.

So if --multi-thread-cutoff 250M and --multi-thread-streams 4 are in effect (the defaults):

0..250 MiB files will be downloaded with 1 stream
250..500 MiB files will be downloaded with 2 streams
500..750 MiB files will be downloaded with 3 streams
750+ MiB files will be downloaded with 4 streams
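
As a rough worked example of the calculation above, a 600 MiB file would be downloaded with 3 streams (600 divided by 250, rounded up). Raising the stream limit only affects files larger than 4 x 250 MiB, e.g. (the paths here are placeholders):

rclone copy remote:path/largefile /path/to/local --multi-thread-streams 8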

The --no-check-dest flag can be used with move or copy and it causes rclone not to check the destination at all when copying files.

This means that:

the destination is not listed minimising the API calls
files are always transferred
this can cause duplicates on remotes which allow it (e.g. Google Drive)
--retries 1 is recommended otherwise you'll transfer everything again on a retry

This flag is useful to minimise the transactions if you know that none of the files are on the destination.
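
For example, when seeding an empty destination without listing it first (placeholder remote names):

rclone copy source:path dest:path --no-check-dest --retries 1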

This is a specialized flag which should be ignored by most users!

Don't set Accept-Encoding: gzip. This means that rclone won't ask the server for compressed files automatically. Useful if you've set the server to return files with Content-Encoding: gzip but you uploaded compressed files.

There is no need to set this in normal operation, and doing so will decrease the network transfer efficiency of rclone.

The --no-traverse flag controls whether the destination file system is traversed when using the copy or move commands. --no-traverse is not compatible with sync and will be ignored if you supply it with sync.

If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.

However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven't changed and won't need copying then you shouldn't use --no-traverse.

See rclone copy (https://rclone.org/commands/rclone_copy/) for an example of how to use it.

Don't normalize unicode characters in filenames during the sync routine.

Sometimes, an operating system will store filenames containing unicode parts in their decomposed form (particularly macOS). Some cloud storage systems will then recompose the unicode, resulting in duplicate files if the data is ever copied back to a local filesystem.

Using this flag will disable that functionality, treating each unicode character as unique. For example, by default é and é will be normalized into the same character. With --no-unicode-normalization they will be treated as unique characters.

When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally.

This can be used if the remote is being synced with another tool also (e.g. the Google Drive client).

The --order-by flag controls the order in which files in the backlog are processed in rclone sync, rclone copy and rclone move.

The order by string is constructed like this. The first part describes what aspect is being measured:

size - order by the size of the files
name - order by the full path of the files
modtime - order by the modification date of the files

This can have a modifier appended with a comma:

ascending or asc - order so that the smallest (or oldest) is processed first
descending or desc - order so that the largest (or newest) is processed first
mixed - order so that the smallest is processed first for some threads and the largest for others

If the modifier is mixed then it can have an optional percentage (which defaults to 50), e.g. size,mixed,25 which means that 25% of the threads should be taking the smallest items and 75% the largest. The threads which take the smallest first will always take the smallest first and likewise the largest first threads. The mixed mode can be useful to minimise the transfer time when you are transferring a mixture of large and small files - the large files are guaranteed upload threads and bandwidth and the small files will be processed continuously.

If no modifier is supplied then the order is ascending.

For example

--order-by size,desc - send the largest files first
--order-by modtime,ascending - send the oldest files first
--order-by name - send the files sorted alphabetically by path first

If the --order-by flag is not supplied or it is supplied with an empty string then the default ordering will be used which is as scanned. With --checkers 1 this is mostly alphabetical, however with the default --checkers 8 it is somewhat random.

The --order-by flag does not do a separate pass over the data. This means that it may transfer some files out of the order specified if

there are no files in the backlog or the source has not been fully scanned yet
there are more than --max-backlog files in the backlog

Rclone will do its best to transfer the best file it has so in practice this should not cause a problem. Think of --order-by as being more of a best efforts flag rather than a perfect ordering.

If you want perfect ordering then you will need to specify --check-first which will find all the files which need transferring first before transferring any.
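
For example, to get strict largest-first ordering at the cost of a full scan before any transfer starts (placeholder remote names):

rclone copy source:path dest:path --order-by size,desc --check-first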

This flag supplies a program which should supply the config password when run. This is an alternative to rclone prompting for the password or setting the RCLONE_CONFIG_PASS variable.

The argument to this should be a command with a space separated list of arguments. If one of the arguments has a space in it then enclose it in ", if you want a literal " in an argument then enclose the argument in " and double the ". See CSV encoding (https://godoc.org/encoding/csv) for more info.

Eg

--password-command echo hello
--password-command echo "hello with space"
--password-command echo "hello with ""quotes"" and space"

See the Configuration Encryption section for more info.

See a Windows PowerShell example on the Wiki (https://github.com/rclone/rclone/wiki/Windows-Powershell-use-rclone-password-command-for-Config-file-password).

This flag makes rclone update the stats in a static block in the terminal providing a realtime overview of the transfer.

Any log messages will scroll above the static block. Log messages will push the static block down to the bottom of the terminal where it will stay.

Normally this is updated every 500ms but this period can be overridden with the --stats flag.

This can be used with the --stats-one-line flag for a simpler display.

Note: On Windows until this bug (https://github.com/Azure/go-ansiterm/issues/26) is fixed all non-ASCII characters will be replaced with . when --progress is in use.

This flag, when used with -P/--progress, will print the string ETA: %s to the terminal title.

This flag will limit rclone's output to error messages only.

The --refresh-times flag can be used to update modification times of existing files when they are out of sync on backends which don't support hashes.

This is useful if you uploaded files with the incorrect timestamps and you now wish to correct them.

This flag is only useful for destinations which don't support hashes (e.g. crypt).

This can be used with any of the sync commands sync, copy or move.

To use this flag you will need to be doing a modification time sync (so not using --size-only or --checksum). The flag will have no effect when using --size-only or --checksum.

If this flag is used when rclone comes to upload a file it will check to see if there is an existing file on the destination. If this file matches the source with size (and checksum if available) but has a differing timestamp then instead of re-uploading it, rclone will update the timestamp on the destination file. If the checksum does not match rclone will upload the new file. If the checksum is absent (e.g. on a crypt backend) then rclone will update the timestamp.

Note that some remotes can't set the modification time without re-uploading the file so this flag is less useful on them.

Normally if you are doing a modification time sync rclone will update modification times without --refresh-times provided that the remote supports checksums and the checksums match on the file. However if the checksums are absent then rclone will upload the file rather than setting the timestamp as this is the safe behaviour.

Retry the entire sync if it fails this many times (default 3).

Some remotes can be unreliable and a few retries help pick up the files which didn't get transferred because of errors.

Disable retries with --retries 1.

This sets the interval between each retry specified by --retries.

The default is 0. Use 0 to disable.
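
For example, to retry the whole sync up to 5 times, waiting 10 seconds between attempts (placeholder remote names):

rclone sync source:path dest:path --retries 5 --retries-sleep 10s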

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.

This can be useful when transferring files from Dropbox which have been modified by the desktop sync client, which doesn't set checksums or modification times in the same way as rclone.

Commands which transfer data (sync, copy, copyto, move, moveto) will print data transfer stats at regular intervals to show their progress.

This sets the interval.

The default is 1m. Use 0 to disable.

If you set the stats interval then all commands can show stats. This can be useful when running other commands, check or mount for example.

Stats are logged at INFO level by default which means they won't show at default log level NOTICE. Use --stats-log-level NOTICE or -v to make them show. See the Logging section for more info on log levels.

Note that on macOS you can send a SIGINFO (which is normally ctrl-T in the terminal) to make the stats print immediately.
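
For example, to print stats every 10 seconds while running a check (a sketch with placeholder remote names; -v makes the stats visible at the default log level):

rclone check source:path dest:path --stats 10s -v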

By default, the --stats output will truncate file names and paths longer than 40 characters. This is equivalent to providing --stats-file-name-length 40. Use --stats-file-name-length 0 to disable any truncation of file names printed by stats.

Log level to show --stats output at. This can be DEBUG, INFO, NOTICE, or ERROR. The default is INFO. This means at the default level of logging which is NOTICE the stats won't show - if you want them to then use --stats-log-level NOTICE. See the Logging section for more info on log levels.

When this is specified, rclone condenses the stats into a single line showing the most important stats only.

When this is specified, rclone enables the single-line stats and prepends the display with a date string. The default is 2006/01/02 15:04:05 -

When this is specified, rclone enables the single-line stats and prepends the display with a user-supplied date string. The date string MUST be enclosed in quotes. Follow golang specs (https://golang.org/pkg/time/#Time.Format) for date formatting syntax.

By default, data transfer rates will be printed in bytes per second.

This option allows the data rate to be printed in bits per second.

Data transfer volume will still be reported in bytes.

The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bit/s and not 1,000,000 bit/s.

The default is bytes.

When using sync, copy or move any files which would have been overwritten or deleted will have the suffix added to them. If there is a file with the same path (after the suffix has been added), then it will be overwritten.

The remote in use must support server-side move or copy and you must use the same remote as the destination of the sync.

This is for use with files to add the suffix in the current directory or with --backup-dir. See --backup-dir for more info.

For example

rclone copy -i /path/to/local/file remote:current --suffix .bak

will copy /path/to/local/file to remote:current, but any files which would have been updated or deleted will have .bak added.

If using rclone sync with --suffix and without --backup-dir then it is recommended to put a filter rule in excluding the suffix otherwise the sync will delete the backup files.

rclone sync -i /path/to/local/file remote:current --suffix .bak --exclude "*.bak"

When using --suffix, setting this causes rclone to put the SUFFIX before the extension of the files that it backs up rather than after.

So let's say we had --suffix -2019-01-01, without the flag file.txt would be backed up to file.txt-2019-01-01 and with the flag it would be backed up to file-2019-01-01.txt. This can be helpful to make sure the suffixed files can still be opened.

On capable OSes (not Windows or Plan9) send all log output to syslog.

This can be useful for running rclone in a script or rclone mount.

If using --syslog this sets the syslog facility (e.g. KERN, USER). See man syslog for a list of possible facilities. The default facility is DAEMON.
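
For example, to send the log of a sync to syslog under the USER facility (placeholder remote names):

rclone sync source:path dest:path --syslog --syslog-facility USER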

Specify the directory rclone will use for temporary files, to override the default. Make sure the directory exists and has accessible permissions.

By default the operating system's temp directory will be used: - On Unix systems, $TMPDIR if non-empty, else /tmp. - On Windows, the first non-empty value from %TMP%, %TEMP%, %USERPROFILE%, or the Windows directory.

When overriding the default with this option, the specified path will be set as value of environment variable TMPDIR on Unix systems and TMP and TEMP on Windows.

You can use the config paths (https://rclone.org/commands/rclone_config_paths/) command to see the current value.
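
For example, to point temporary files at a scratch disk (the path and remote names are placeholders):

rclone copy source:path dest:path --temp-dir /mnt/scratch/rclone-tmp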

Limit transactions per second to this number. Default is 0 which means unlimited transactions per second.

A transaction is roughly defined as an API call; its exact meaning will depend on the backend. For HTTP based backends it is an HTTP PUT/GET/POST/etc and its response. For FTP/SFTP it is a round trip transaction over TCP.

For example, to limit rclone to 10 transactions per second use --tpslimit 10, or to 1 transaction every 2 seconds use --tpslimit 0.5.

Use this when the number of transactions per second from rclone is causing a problem with the cloud storage provider (e.g. getting you banned or rate limited).

This can be very useful for rclone mount to control the behaviour of applications using it.

This limit applies to all HTTP based backends and to the FTP and SFTP backends. It does not apply to the local backend or the Storj backend.

See also --tpslimit-burst.

Max burst of transactions for --tpslimit (default 1).

Normally --tpslimit will do exactly the number of transactions per second specified. However if you supply --tpslimit-burst then rclone can save up some transactions from when it was idle giving a burst of up to the parameter supplied.

For example if you provide --tpslimit-burst 10 then if rclone has been idle for more than 10*--tpslimit then it can do 10 transactions very quickly before they are limited again.

This may be used to increase performance of --tpslimit without changing the long term average number of transactions per second.

By default, rclone doesn't keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.

If you use this flag, and the remote supports server-side copy or server-side move, and the source and destination have a compatible hash, then this will track renames during sync operations and perform renaming server-side.

Files will be matched by size and hash - if both match then a rename will be considered.

If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console.

Encrypted destinations are not currently supported by --track-renames if --track-renames-strategy includes hash.

Note that --track-renames is incompatible with --no-traverse and that it uses extra memory to keep track of all the rename candidates.

Note also that --track-renames is incompatible with --delete-before and will select --delete-after instead of --delete-during.

This option changes the matching criteria for --track-renames.

The matching is controlled by a comma separated selection of these tokens:

modtime - the modification time of the file - not supported on all backends
hash - the hash of the file contents - not supported on all backends
leaf - the name of the file not including its directory name
size - the size of the file (this is always enabled)

So using --track-renames-strategy modtime,leaf would match files based on modification time, the leaf of the file name and the size only.

Using --track-renames-strategy modtime or leaf can enable --track-renames support for encrypted destinations.

If nothing is specified, the default option is matching by hashes.

Note that the hash strategy is not supported with encrypted destinations.
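
For example, to track renames on an encrypted destination using modification time and leaf name (cryptdest: is a placeholder name for a crypt remote):

rclone sync source:path cryptdest:path --track-renames --track-renames-strategy modtime,leaf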

This option allows you to specify when files on your destination are deleted when you sync folders.

Specifying the value --delete-before will delete all files present on the destination, but not on the source, before starting the transfer of any new or updated files. This uses two passes through the file systems, one for the deletions and one for the copies.

Specifying --delete-during will delete files while checking and uploading files. This is the fastest option and uses the least memory.

Specifying --delete-after (the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors.
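
For example, to use the fastest deletion mode on a sync (placeholder remote names):

rclone sync source:path dest:path --delete-during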

When doing anything which involves a directory listing (e.g. sync, copy, ls - in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very quickly using the least amount of memory.

However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket-based remotes (e.g. S3, B2, GCS, Swift, Hubic).

If you use the --fast-list flag then rclone will use this method for listing directories. This will have the following consequences for the listing:

It will use fewer transactions (important if you pay for them)
It will use more memory. Rclone has to load the whole listing into memory.
It may be faster because it uses fewer transactions
It may be slower because it can't be parallelized

rclone should always give identical results with and without --fast-list.

If you pay for transactions and can fit your entire sync listing into memory then --fast-list is recommended. If you have a very big sync to do then don't use --fast-list otherwise you will run out of memory.

If you use --fast-list on a remote which doesn't support it, then rclone will just ignore it.
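
For example, when syncing to a bucket-based remote where transactions are billed (s3remote: is a placeholder name for such a remote):

rclone sync source:path s3remote:bucket --fast-list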

This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.

The default is 5m. Set to 0 to disable.

The number of file transfers to run in parallel. It can sometimes be useful to set this to a smaller number if the remote is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote.

The default is to run 4 file transfers in parallel.

Look at --multi-thread-streams if you would like to control single file transfers.

This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.

This can be useful in avoiding needless transfers when transferring to a remote which doesn't support modification times directly (or when using --use-server-modtime to avoid extra API calls) as it is more accurate than a --size-only check and faster than using --checksum. On such remotes (or when using --use-server-modtime) the time checked will be the uploaded time.

If an existing destination file has a modification time older than the source file's, it will be updated if the sizes are different. If the sizes are the same, it will be updated if the checksum is different or not available.

If an existing destination file has a modification time equal (within the computed modify window) to the source file's, it will be updated if the sizes are different. The checksum will not be checked in this case unless the --checksum flag is provided.

In all other cases the file will not be updated.

Consider using the --modify-window flag to compensate for time skews between the source and the backend, for backends that do not support mod times, and instead use uploaded times. However, if the backend does not support checksums, note that sync'ing or copying within the time skew window may still result in additional transfers for safety.

If this flag is set then rclone will use anonymous memory allocated by mmap on Unix based platforms and VirtualAlloc on Windows for its transfer buffers (size controlled by --buffer-size). Memory allocated like this does not go on the Go heap and can be returned to the OS immediately when it is finished with.

If this flag is not set then rclone will allocate and free the buffers using the Go memory allocator which may use more memory as memory pages are returned less aggressively to the OS.

It is possible this does not work well on all platforms so it is disabled by default; in the future it may be enabled by default.

Some object-store backends (e.g. Swift, S3) do not preserve file modification times (modtime). On these backends, rclone stores the original modtime as additional metadata on the object. By default it will make an API call to retrieve the metadata when the modtime is needed by an operation.

Use this flag to disable the extra API call and rely instead on the server's modified time. In cases such as a local to remote sync using --update, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.

Using this flag on a sync operation without also using --update would cause all files modified at any time other than the last upload time to be uploaded again, which is probably not what you want.
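
For example, a local to remote sync that relies on the uploaded time instead of extra metadata calls (remote: is a placeholder name):

rclone sync /path/to/local remote:path --update --use-server-modtime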

With -v rclone will tell you about each file that is transferred and a small number of significant events.

With -vv rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.

When setting verbosity as an environment variable, use RCLONE_VERBOSE=1 or RCLONE_VERBOSE=2 for -v and -vv respectively.

Prints the version number

The outgoing SSL/TLS connections rclone makes can be controlled with these options. For example this can be very useful with the HTTP or WebDAV backends. Rclone HTTP servers have their own set of configuration for SSL/TLS which you can find in their documentation.

This loads the PEM encoded certificate authority certificate and uses it to verify the certificates of the servers rclone connects to.

If you have generated certificates signed with a local CA then you will need this flag to connect to servers using those certificates.

This loads the PEM encoded client side certificate.

This is used for mutual TLS authentication (https://en.wikipedia.org/wiki/Mutual_authentication).

The --client-key flag is required too when using this.

This loads the PEM encoded client side private key used for mutual TLS authentication. Used in conjunction with --client-cert.
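
For example, a sketch of connecting with mutual TLS and a private CA (the file and remote names are placeholders):

rclone ls remote:path --ca-cert ca.pem --client-cert client.pem --client-key client.key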

--no-check-certificate controls whether a client verifies the server's certificate chain and host name. If --no-check-certificate is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.

This option defaults to false.

This should be used only for testing.

Your configuration file contains information for logging in to your cloud services. This means that you should keep your rclone.conf file in a secure location.

If you are in an environment where that isn't possible, you can add a password to your configuration. This means that you will have to supply the password every time you start rclone.

To add a password to your rclone configuration, execute rclone config.

>rclone config
Current remotes:
e) Edit existing remote
n) New remote
d) Delete remote
s) Set configuration password
q) Quit config
e/n/d/s/q>

Go into s, Set configuration password:

e/n/d/s/q> s
Your configuration is not encrypted.
If you add a password, you will protect your login information to cloud services.
a) Add Password
q) Quit to main menu
a/q> a
Enter NEW configuration password:
password:
Confirm NEW password:
password:
Password set
Your configuration is encrypted.
c) Change Password
u) Unencrypt configuration
q) Quit to main menu
c/u/q>

Your configuration is now encrypted, and every time you start rclone you will have to supply the password. See below for details. In the same menu, you can change the password or completely remove encryption from your configuration.

There is no way to recover the configuration if you lose your password.

rclone uses nacl secretbox (https://godoc.org/golang.org/x/crypto/nacl/secretbox) which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your configuration with secret-key cryptography. The password is SHA-256 hashed, which produces the key for secretbox. The hashed password is not stored.

While this provides very good security, we do not recommend storing your encrypted rclone configuration in public if it contains sensitive information, except perhaps if you use a very strong password.

If it is safe in your environment, you can set the RCLONE_CONFIG_PASS environment variable to contain your password, in which case it will be used for decrypting the configuration.

You can set this for a session from a script. For unix like systems save this to a file called set-rclone-password:

#!/bin/echo Source this file don't run it
read -s RCLONE_CONFIG_PASS
export RCLONE_CONFIG_PASS

Then source the file when you want to use it. From the shell you would do source set-rclone-password. It will then ask you for the password and set it in the environment variable.

An alternate means of supplying the password is to provide a script which will retrieve the password and print on standard output. This script should have a fully specified path name and not rely on any environment variables. The script is supplied either via --password-command="..." command line argument or via the RCLONE_PASSWORD_COMMAND environment variable.

One useful example of this is using the passwordstore application to retrieve the password:

export RCLONE_PASSWORD_COMMAND="pass rclone/config"

If the passwordstore password manager holds the password for the rclone configuration, using the script method means the password is primarily protected by the passwordstore system, and is never embedded in the clear in scripts, nor available for examination using the standard commands available. It is quite possible with long running rclone sessions for copies of passwords to be innocently captured in log files or terminal scroll buffers, etc. Using the script method of supplying the password enhances the security of the config password considerably.

If you are running rclone inside a script, unless you are using the --password-command method, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS doesn't contain a valid password, and --password-command has not been supplied.

Whenever running commands that may be affected by options in a configuration file, rclone will look for an existing file according to the rules described above, and load any it finds. If an encrypted file is found, this includes decrypting it, with the possible consequence of a password prompt. When executing a command line that you know is not actually using anything from such a configuration file, you can avoid it being loaded by overriding the location, e.g. with one of the documented special values for memory-only configuration. Since only backend options can be stored in configuration files, this is normally unnecessary for commands that do not operate on backends, e.g. genautocomplete. However, it will be relevant for commands that do operate on backends in general, but are used without referencing a stored remote, e.g. listing local filesystem paths, or connection strings: rclone --config="" ls .

These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with the remote name e.g. --drive-test-option - see the docs for the remote in question.

Write CPU profile to file. This can be analysed with go tool pprof.

The --dump flag takes a comma separated list of flags to dump info about.

Note that some headers including Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the Go standard library's auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.

The available flags are:

Dump HTTP headers with Authorization: lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.

Use --dump auth if you do want the Authorization: headers.
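
For example, to watch the request and response headers for a single directory listing (remote: is a placeholder name):

rclone lsd remote: --dump headers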

Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.

Note that the bodies are buffered in memory so don't use this for enormous files.

Like --dump bodies but dumps the request bodies and the response headers. Useful for debugging download problems.

Like --dump bodies but dumps the response bodies and the request headers. Useful for debugging upload problems.

Dump HTTP headers - will contain sensitive info such as Authorization: headers - use --dump headers to dump without Authorization: headers. Can be very verbose. Useful for debugging only.

Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.

This dumps a list of the running go-routines at the end of the command to standard output.

This dumps a list of the open files at the end of the command. It uses the lsof command to do that so you'll need that installed to use it.

Write memory profile to file. This can be analysed with go tool pprof.

For the filtering options

--delete-excluded
--filter
--filter-from
--exclude
--exclude-from
--exclude-if-present
--include
--include-from
--files-from
--files-from-raw
--min-size
--max-size
--min-age
--max-age
--dump filters

See the filtering section (https://rclone.org/filtering/).

For the remote control options and for instructions on how to remote control rclone

--rc
and anything starting with --rc-

See the remote control section (https://rclone.org/rc/).

rclone has 4 levels of logging, ERROR, NOTICE, INFO and DEBUG.

By default, rclone logs to standard error. This means you can redirect standard error and still see the normal output of rclone commands (e.g. rclone ls).

By default, rclone will produce Error and Notice level messages.

If you use the -q flag, rclone will only produce Error messages.

If you use the -v flag, rclone will produce Error, Notice and Info messages.

If you use the -vv flag, rclone will produce Error, Notice, Info and Debug messages.

You can also control the log levels with the --log-level flag.

If you use the --log-file=FILE option, rclone will redirect Error, Info and Debug messages along with standard error to FILE.

If you use the --syslog flag then rclone will log to syslog and the --syslog-facility flag controls which facility it uses.

Rclone prefixes all log messages with their level in capitals, e.g. INFO which makes it easy to grep the log file for different kinds of information.

If any errors occur during the command execution, rclone will exit with a non-zero exit code. This allows scripts to detect when rclone operations have failed.

During the startup phase, rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.

When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.

0 - success
1 - Syntax or usage error
2 - Error not otherwise categorised
3 - Directory not found
4 - File not found
5 - Temporary error (one that more retries might fix) (Retry errors)
6 - Less serious errors (like 461 errors from dropbox) (NoRetry errors)
7 - Fatal error (one that more retries won't fix, like account suspended) (Fatal errors)
8 - Transfer exceeded - limit set by --max-transfer reached
9 - Operation successful, but no files transferred
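
For example, a script might branch on the exit code; this is a sketch assuming a POSIX shell and placeholder remote names:

rclone copy source:path dest:path --max-transfer 10G
if [ "$?" -eq 8 ]; then
    echo "stopped: transfer limit reached"
fi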

Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

Every option in rclone can have its default set by environment variable.

To find the name of the environment variable, first, take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.

For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.

Or to always use the trash in drive --drive-use-trash, set RCLONE_DRIVE_USE_TRASH=true.

Verbosity is slightly different, the environment variable equivalent of --verbose or -v is RCLONE_VERBOSE=1, or for -vv, RCLONE_VERBOSE=2.

The same parser is used for the options and the environment variables so they take exactly the same form.

The options set by environment variables can be seen with the -vv flag, e.g. rclone version -vv.
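
For example, a sketch of setting defaults for a session in a POSIX shell before running a command:

export RCLONE_STATS=5s
export RCLONE_DRIVE_USE_TRASH=true
rclone version -vv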

You can set defaults for values in the config file on an individual remote basis. The names of the config items are documented in the page for each backend.

To find the name of the environment variable you need to set, take RCLONE_CONFIG_ + name of remote + _ + name of config file option and make it all uppercase.

For example, to configure an S3 remote named mys3: without a config file (using unix ways of setting environment variables):

$ export RCLONE_CONFIG_MYS3_TYPE=s3
$ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
$ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
$ rclone lsd mys3:
          -1 2016-09-21 12:54:21        -1 my-bucket
$ rclone listremotes | grep mys3
mys3:

Note that if you want to create a remote using environment variables you must create the ..._TYPE variable as above.

Note that the name of a remote created using environment variable is case insensitive, in contrast to regular remotes stored in config file as documented above. You must write the name in uppercase in the environment variable, but as seen from example above it will be listed and can be accessed in lowercase, while you can also refer to the same remote in uppercase:

$ rclone lsd mys3:
          -1 2016-09-21 12:54:21        -1 my-bucket
$ rclone lsd MYS3:
          -1 2016-09-21 12:54:21        -1 my-bucket

Note that you can only set the options of the immediate backend, so RCLONE_CONFIG_MYS3CRYPT_ACCESS_KEY_ID has no effect, if myS3Crypt is a crypt remote based on an S3 remote. However RCLONE_S3_ACCESS_KEY_ID will set the access key of all remotes using S3, including myS3Crypt.

Note also that now rclone has connection strings, it is probably easier to use those instead which makes the above example

rclone lsd :s3,access_key_id=XXX,secret_access_key=XXX:

The various different methods of backend configuration are read in this order and the first one with a value is used.

Parameters in connection strings, e.g. myRemote,skip_links:
Flag values as supplied on the command line, e.g. --skip-links
Remote specific environment vars, e.g. RCLONE_CONFIG_MYREMOTE_SKIP_LINKS (see above).
Backend-specific environment vars, e.g. RCLONE_LOCAL_SKIP_LINKS.
Backend generic environment vars, e.g. RCLONE_SKIP_LINKS.
Config file, e.g. skip_links = true.
Default values, e.g. false - these can't be changed.

So if both --skip-links is supplied on the command line and an environment variable RCLONE_LOCAL_SKIP_LINKS is set, the command line flag will take preference.

The backend configurations set by environment variables can be seen with the -vv flag, e.g. rclone about myRemote: -vv.

For non backend configuration the order is as follows:

Flag values as supplied on the command line, e.g. --stats 5s.
Environment vars, e.g. RCLONE_STATS=5s.
Default values, e.g. 1m - these can't be changed.

RCLONE_CONFIG_PASS set to contain your config file password (see Configuration Encryption section)
HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof).
HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
The environment values may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed.
USER and LOGNAME values are used as fallbacks for current username. The primary method for looking up username is OS-specific: Windows API on Windows, real user ID in /etc/passwd on Unix systems. In the documentation the current username is simply referred to as $USER.
RCLONE_CONFIG_DIR - rclone sets this variable for use in config files and sub processes to point to the directory holding the config file.

The options set by environment variables can be seen with the -vv and --log-level=DEBUG flags, e.g. rclone version -vv.

Some of the configurations (those involving oauth2) require an Internet connected web browser.

If you are trying to set rclone up on a remote or headless box with no browser available on it (e.g. a NAS or a server in a datacenter) then you will need to use an alternative means of configuration. There are two ways of doing it, described below.

On the headless box run rclone config but answer N to the Use auto config? question.

...
Remote config
Use auto config?

 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n> n
For this to work, you will need rclone available on a machine that has
a web browser available.
For more help and alternate methods see: https://rclone.org/remote_setup/
Execute the following on the machine with the web browser (same rclone
version recommended):
    rclone authorize "amazon cloud drive"
Then paste the result below:
result>

Then on your main desktop machine

rclone authorize "amazon cloud drive"
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
Paste the following into your remote machine --->
SECRET_TOKEN
<---End paste

Then back to the headless box, paste in the code

result> SECRET_TOKEN
--------------------
[acd12]
client_id = 
client_secret = 
token = SECRET_TOKEN
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>

Rclone stores all of its config in a single configuration file. This can easily be copied to configure a remote rclone.

So first configure rclone on your desktop machine with

rclone config

to set up the config file.

Find the config file by running rclone config file, for example

$ rclone config file
Configuration file is stored at:
/home/user/.rclone.conf

Now transfer it to the remote box (scp, cut paste, ftp, sftp, etc.) and place it in the correct place (use rclone config file on the remote box to find out where).

Linux and macOS users can use an SSH tunnel to redirect the headless box port 53682 to the local machine by using the following command:

ssh -L localhost:53682:localhost:53682 username@remote_server

Then on the headless box run rclone config and answer Y to the Use auto config? question.

...
Remote config
Use auto config?

 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n> y

Then copy and paste the auth url http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx to the browser on your local machine, complete the auth and it is done.

Filter flags determine which files rclone sync, move, ls, lsl, md5sum, sha1sum, size, delete, check and similar commands apply to.

They are specified in terms of path/file name patterns; path/file lists; file age and size, or presence of a file in a directory. Bucket based remotes without the concept of directory apply filters to object key, age and size in an analogous way.

Rclone purge does not obey filters.

To test filters without risk of damage to data, apply them to rclone ls, or with the --dry-run and -vv flags.

Rclone filter patterns can only be used in filter command line options, not in the specification of a remote.

E.g. rclone copy "remote:dir*.jpg" /path/to/dir does not have a filter effect. rclone copy remote:dir /path/to/dir --include "*.jpg" does.

Important Avoid mixing any two of --include..., --exclude... or --filter... flags in an rclone command. The results may not be what you expect. Instead use a --filter... flag.

Here is a formal definition of the pattern syntax, examples are below.

Rclone matching rules follow a glob style:

*         matches any sequence of non-separator (/) characters
**        matches any sequence of characters including / separators
?         matches any single non-separator (/) character
[ [ ! ] { character-range } ]
          character class (must be non-empty)
{ pattern-list }
          pattern alternatives
{{ regexp }}
          regular expression to match
c         matches character c (c != *, **, ?, \, [, {, })
\c        matches reserved character c (c = *, **, ?, \, [, {, }) or character class

character-range:

c         matches character c (c != \, -, ])
\c        matches reserved character c (c = \, -, ])
lo - hi   matches character c for lo <= c <= hi

pattern-list:

pattern { , pattern }

comma-separated (without spaces) patterns

character classes (see Go regular expression reference (https://golang.org/pkg/regexp/syntax/)) include:

Named character classes (e.g. [\d], [^\d], [\D], [^\D])
Perl character classes (e.g. \s, \S, \w, \W)
ASCII character classes (e.g. [[:alnum:]], [[:alpha:]], [[:punct:]], [[:xdigit:]])

regexp for advanced users to insert a regular expression - see below for more info:

Any re2 regular expression not containing `}}`

If the filter pattern starts with a / then it only matches at the top level of the directory tree, relative to the root of the remote (not necessarily the root of the drive). If it does not start with / then it is matched starting at the end of the path/file name but it only matches a complete path element - it must match from a / separator or the beginning of the path/file.

file.jpg   - matches "file.jpg"
           - matches "directory/file.jpg"
           - doesn't match "afile.jpg"
           - doesn't match "directory/afile.jpg"
/file.jpg  - matches "file.jpg" in the root directory of the remote
           - doesn't match "afile.jpg"
           - doesn't match "directory/file.jpg"

The top level of the remote may not be the top level of the drive.

E.g. for a Microsoft Windows local directory structure

F:
├── bkp
├── data
│   ├── excl
│   │   ├── 123.jpg
│   │   └── 456.jpg
│   ├── incl
│   │   └── document.pdf

To copy the contents of folder data into folder bkp excluding the contents of subfolder excl, the following command treats F:\data and F:\bkp as top level for filtering.

rclone copy F:\data\ F:\bkp\ --exclude=/excl/**

Important Use / in path/file name patterns and not \ even if running on Microsoft Windows.

Simple patterns are case sensitive unless the --ignore-case flag is used.

Without --ignore-case (default)

potato - matches "potato"
       - doesn't match "POTATO"

With --ignore-case

potato - matches "potato"
       - matches "POTATO"

The syntax of filter patterns is glob style matching (like bash uses) to make things easy for users. However this does not provide absolute control over the matching, so for advanced users rclone also provides a regular expression syntax.

The regular expressions used are as defined in the Go regular expression reference (https://golang.org/pkg/regexp/syntax/). Regular expressions should be enclosed in {{ }}. They will match only the last path segment if the glob doesn't start with / or the whole path name if it does. Note that rclone does not attempt to parse the supplied regular expression, meaning that using any regular expression filter will prevent rclone from using directory filter rules, as it will instead check every path against the supplied regular expression(s).

Here is how the {{regexp}} is transformed into a full regular expression to match the entire path:

{{regexp}}  becomes (^|/)(regexp)$
/{{regexp}} becomes ^(regexp)$

Regexp syntax can be mixed with glob syntax, for example

*.{{jpe?g}} to match file.jpg, file.jpeg but not file.png

You can also use regexp flags - to set case insensitive, for example

*.{{(?i)jpg}} to match file.jpg, file.JPG but not file.png

Be careful with wildcards in regular expressions - you don't want them to match path separators normally. To match any file name starting with start and ending with end write

{{start[^/]*end\.jpg}}

Not

{{start.*end\.jpg}}

Which will match a directory called start with a file called end.jpg in it as the .* will match / characters.

Note that you can use -vv --dump filters to show the filter patterns in regexp format - rclone implements the glob patterns by transforming them into regular expressions.

Description      Pattern           Matches                          Does not match
Wildcard         *.jpg             /file.jpg                        /file.png
                                   /dir/file.jpg                    /dir/file.png
Rooted           /*.jpg            /file.jpg                        /file.png
                                   /file2.jpg                       /dir/file.jpg
Alternates       *.{jpg,png}       /file.jpg                        /file.gif
                                   /dir/file.png                    /dir/file.gif
Path Wildcard    dir/**            /dir/anyfile                     file.png
                                   /subdir/dir/subsubdir/anyfile    /subdir/file.png
Any Char         *.t?t             /file.txt                        /file.qxt
                                   /dir/file.tzt                    /dir/file.png
Range            *.[a-z]           /file.a                          /file.0
                                   /dir/file.b                      /dir/file.1
Escape           *.\?\?\?          /file.???                        /file.abc
                                   /dir/file.???                    /dir/file.def
Class            *.\d\d\d          /file.012                        /file.abc
                                   /dir/file.345                    /dir/file.def
Regexp           *.{{jpe?g}}       /file.jpeg                       /file.png
                                   /dir/file.jpg                    /dir/file.jpeeg
Rooted Regexp    /{{.*\.jpe?g}}    /file.jpeg                       /file.png
                                   /file.jpg                        /dir/file.jpg

Rclone path/file name filters are made up of one or more of the following flags:

--include
--include-from
--exclude
--exclude-from
--filter
--filter-from

There can be more than one instance of individual flags.

Rclone internally uses a combined list of all the include and exclude rules. The order in which rules are processed can influence the result of the filter.

All flags of the same type are processed together in the order above, regardless of what order the different types of flags are included on the command line.

Multiple instances of the same flag are processed from left to right according to their position in the command line.

To mix up the order of processing includes and excludes use --filter... flags.

Within --include-from, --exclude-from and --filter-from flags rules are processed from top to bottom of the referenced file.

If there is an --include or --include-from flag specified, rclone implies a - ** rule which it adds to the bottom of the internal rule list. Specifying a + rule with a --filter... flag does not imply that rule.

Each path/file name passed through rclone is matched against the combined filter list. At first match to a rule the path/file name is included or excluded and no further filter rules are processed for that path/file.

If rclone does not find a match, after testing against all rules (including the implied rule if appropriate), the path/file name is included.

Any path/file included at that stage is processed by the rclone command.

--files-from and --files-from-raw flags override and cannot be combined with other filter options.

To see the internal combined rule list, in regular expression form, for a command add the --dump filters flag. Running an rclone command with --dump filters and -vv flags lists the internal filter elements and shows how they are applied to each source path/file. There is not currently a means provided to pass regular expression filter options into rclone directly though character class filter rules contain character classes. Go regular expression reference (https://golang.org/pkg/regexp/syntax/)

Rclone commands are applied to path/file names not directories. The entire contents of a directory can be matched to a filter by the pattern directory/* or recursively by directory/**.

Directory filter rules are defined with a closing / separator.

E.g. /directory/subdirectory/ is an rclone directory filter rule.

Rclone commands can use directory filter rules to determine whether they recurse into subdirectories. This potentially optimises access to a remote by avoiding listing unnecessary directories. Whether optimisation is desirable depends on the specific filter rules and source remote content.

If any regular expression filters are in use, then no directory recursion optimisation is possible, as rclone must check every path against the supplied regular expression(s).

Directory recursion optimisation occurs if either:

A source remote does not support the rclone ListR primitive. local, sftp, Microsoft OneDrive and WebDAV do not support ListR. Google Drive and most bucket type storage do. Full list (https://rclone.org/overview/#optional-features)
On other remotes (those that support ListR), if the rclone command is not naturally recursive, and provided it is not run with the --fast-list flag. ls, lsf -R and size are naturally recursive but sync, copy and move are not.
Whenever the --disable ListR flag is applied to an rclone command.

Rclone commands imply directory filter rules from path/file filter rules. To view the directory filter rules rclone has implied for a command specify the --dump filters flag.

E.g. for an include rule

/a/*.jpg

Rclone implies the directory include rule

/a/

Directory filter rules specified in an rclone command can limit the scope of an rclone command but path/file filters still have to be specified.

E.g. rclone ls remote: --include /directory/ will not match any files. Because it is an --include option the --exclude ** rule is implied, and the /directory/ pattern serves only to optimise access to the remote by ignoring everything outside of that directory.

E.g. rclone ls remote: --filter-from filter-list.txt with a file filter-list.txt:

- /dir1/
- /dir2/
+ *.pdf
- **

All files in directories dir1 or dir2 or their subdirectories are completely excluded from the listing. Only files of suffix pdf in the root of remote: or its subdirectories are listed. The - ** rule prevents listing of any path/files not previously matched by the rules above.

Option exclude-if-present creates a directory exclude rule based on the presence of a file in a directory and takes precedence over other rclone directory filter rules.

When using pattern list syntax, if a pattern item contains either / or **, then rclone will not be able to imply a directory filter rule from this pattern list.

E.g. for an include rule

{dir1/**,dir2/**}

Rclone will match files below directories dir1 or dir2 only, but will not be able to use this filter to exclude a directory dir3 from being traversed.

Directory recursion optimisation may affect performance, but normally not the result. One exception to this is sync operations with option --create-empty-src-dirs, where any traversed empty directories will be created. With the pattern list example {dir1/**,dir2/**} above, this would create an empty directory dir3 on destination (when it exists on source). Changing the filter to {dir1,dir2}/**, or splitting it into two include rules --include dir1/** --include dir2/**, will match the same files while also filtering directories, with the result that an empty directory dir3 will no longer be created.

--exclude - Exclude files matching pattern

Excludes path/file names from an rclone command based on a single exclude rule.

This flag can be repeated. See above for the order filter flags are processed in.

--exclude should not be used with --include, --include-from, --filter or --filter-from flags.

--exclude has no effect when combined with --files-from or --files-from-raw flags.

E.g. rclone ls remote: --exclude *.bak excludes all .bak files from listing.

E.g. rclone size remote: "--exclude /dir/**" returns the total size of all files on remote: excluding those in root directory dir and sub directories.

E.g. on Microsoft Windows rclone ls remote: --exclude "*\[{JP,KR,HK}\]*" lists the files in remote: with [JP] or [KR] or [HK] in their name. Quotes prevent the shell from interpreting the \ characters. \ characters escape the [ and ] so an rclone filter treats them literally rather than as a character-range. The { and } define an rclone pattern list. For other operating systems single quotes are required, i.e. rclone ls remote: --exclude '*\[{JP,KR,HK}\]*'

--exclude-from - Read exclude patterns from file

Excludes path/file names from an rclone command based on rules in a named file. The file contains a list of remarks and pattern rules.

For an example exclude-file.txt:

# a sample exclude rule file
*.bak
file2.jpg

rclone ls remote: --exclude-from exclude-file.txt lists the files on remote: except those named file2.jpg or with a suffix .bak. That is equivalent to rclone ls remote: --exclude file2.jpg --exclude "*.bak".

This flag can be repeated. See above for the order filter flags are processed in.

The --exclude-from flag is useful where multiple exclude filter rules are applied to an rclone command.

--exclude-from should not be used with --include, --include-from, --filter or --filter-from flags.

--exclude-from has no effect when combined with --files-from or --files-from-raw flags.

--exclude-from followed by - reads filter rules from standard input.

--include - Include files matching pattern

Adds a single include rule based on path/file names to an rclone command.

This flag can be repeated. See above for the order filter flags are processed in.

--include has no effect when combined with --files-from or --files-from-raw flags.

--include implies --exclude ** at the end of an rclone internal filter list. Therefore if you mix --include and --include-from flags with --exclude, --exclude-from, --filter or --filter-from, you must use include rules for all the files you want in the include statement. For more flexibility use the --filter-from flag.

E.g. rclone ls remote: --include "*.{png,jpg}" lists the files on remote: with suffix .png and .jpg. All other files are excluded.

E.g. multiple rclone copy commands can be combined with --include and a pattern-list.

rclone copy /vol1/A remote:A
rclone copy /vol1/B remote:B

is equivalent to:

rclone copy /vol1 remote: --include "{A,B}/**"

E.g. rclone ls remote:/wheat --include "??[^[:punct:]]*" lists the files in remote: directory wheat (and subdirectories) whose third character is not punctuation. This example uses an ASCII character class (https://golang.org/pkg/regexp/syntax/).

--include-from - Read include patterns from file

Adds path/file names to an rclone command based on rules in a named file. The file contains a list of remarks and pattern rules.

For an example include-file.txt:

# a sample include rule file
*.jpg
file2.avi

rclone ls remote: --include-from include-file.txt lists the files on remote: with name file2.avi or suffix .jpg. That is equivalent to rclone ls remote: --include file2.avi --include "*.jpg".

This flag can be repeated. See above for the order filter flags are processed in.

The --include-from flag is useful where multiple include filter rules are applied to an rclone command.

--include-from implies --exclude ** at the end of an rclone internal filter list. Therefore if you mix --include and --include-from flags with --exclude, --exclude-from, --filter or --filter-from, you must use include rules for all the files you want in the include statement. For more flexibility use the --filter-from flag.

--include-from has no effect when combined with --files-from or --files-from-raw flags.

--include-from followed by - reads filter rules from standard input.

--filter - Add a file-filtering rule

Specifies path/file names to an rclone command, based on a single include or exclude rule, in + or - format.

This flag can be repeated. See above for the order filter flags are processed in.

--filter + differs from --include. In the case of --include rclone implies an --exclude ** rule which it adds to the bottom of the internal rule list. --filter + does not imply that rule.

--filter has no effect when combined with --files-from or --files-from-raw flags.

--filter should not be used with --include, --include-from, --exclude or --exclude-from flags.

E.g. rclone ls remote: --filter "- *.bak" excludes all .bak files from a list of remote:.

--filter-from - Read filtering patterns from a file

Adds path/file names to an rclone command based on rules in a named file. The file contains a list of remarks and pattern rules. Include rules start with + and exclude rules with -. ! clears existing rules. Rules are processed in the order they are defined.

This flag can be repeated. See above for the order filter flags are processed in.

Arrange the order of filter rules with the most restrictive first and work down.

E.g. for filter-file.txt:

# a sample filter rule file
- secret*.jpg
+ *.jpg
+ *.png
+ file2.avi
- /dir/Trash/**
+ /dir/**
# exclude everything else
- *

rclone ls remote: --filter-from filter-file.txt lists the path/files on remote: including all jpg and png files, excluding any matching secret*.jpg and including file2.avi. It also includes everything in the directory dir at the root of remote, except remote:dir/Trash which it excludes. Everything else is excluded.

E.g. for an alternative filter-file.txt:

- secret*.jpg
+ *.jpg
+ *.png
+ file2.avi
- *

Files file1.jpg, file3.png and file2.avi are listed whilst secret17.jpg and files without the suffix .jpg or .png are excluded.

E.g. for an alternative filter-file.txt:

+ *.jpg
+ *.gif
!
+ 42.doc
- *

Only file 42.doc is listed. Prior rules are cleared by the !.

--files-from - Read list of source-file names

Adds path/files to an rclone command from a list in a named file. Rclone processes the path/file names in the order of the list, and no others.

Other filter flags (--include, --include-from, --exclude, --exclude-from, --filter and --filter-from) are ignored when --files-from is used.

--files-from expects a list of files as its input. Leading or trailing whitespace is stripped from the input lines. Lines starting with # or ; are ignored.

Rclone commands with a --files-from flag traverse the remote, treating the names in --files-from as a set of filters.

If the --no-traverse and --files-from flags are used together an rclone command does not traverse the remote. Instead it addresses each path/file named in the file individually. Each path/file name typically requires 1 API call, so this can be efficient for a short --files-from list and a remote containing many files.

Rclone commands do not error if any names in the --files-from file are missing from the source remote.

The --files-from flag can be repeated in a single rclone command to read path/file names from more than one file. The files are read from left to right along the command line.

Paths within the --files-from file are interpreted as starting with the root specified in the rclone command. Leading / separators are ignored. See --files-from-raw if you need the input to be processed in a raw manner.

E.g. for a file files-from.txt:

# comment
file1.jpg
subdir/file2.jpg

rclone copy --files-from files-from.txt /home/me/pics remote:pics copies the following, if they exist, and only those files.

/home/me/pics/file1.jpg        → remote:pics/file1.jpg
/home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg

E.g. to copy the following files referenced by their absolute paths:

/home/user1/42
/home/user1/dir/ford
/home/user2/prefect

First find a common subdirectory - in this case /home - and put the remaining files in files-from.txt with or without leading /, e.g.

user1/42
user1/dir/ford
user2/prefect

Then copy these to a remote:

rclone copy --files-from files-from.txt /home remote:backup

The three files are transferred as follows:

/home/user1/42       → remote:backup/user1/42
/home/user1/dir/ford → remote:backup/user1/dir/ford
/home/user2/prefect  → remote:backup/user2/prefect

Alternatively if / is chosen as root files-from.txt will be:

/home/user1/42
/home/user1/dir/ford
/home/user2/prefect

The copy command will be:

rclone copy --files-from files-from.txt / remote:backup

Then there will be an extra home directory on the remote:

/home/user1/42       → remote:backup/home/user1/42
/home/user1/dir/ford → remote:backup/home/user1/dir/ford
/home/user2/prefect  → remote:backup/home/user2/prefect

--files-from-raw - Read list of source-file names without any processing

This flag is the same as --files-from except that input is read in a raw manner. Lines with leading or trailing whitespace, and lines starting with ; or #, are read without any processing. rclone lsf (https://rclone.org/commands/rclone_lsf/) has a compatible format that can be used to export file lists from remotes for input to --files-from-raw.

--ignore-case - make searches case insensitive

By default, rclone filter patterns are case sensitive. The --ignore-case flag makes all of the filter patterns on the command line case insensitive.

E.g. --include "zaphod.txt" does not match a file Zaphod.txt. With --ignore-case a match is made.
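
E.g. a minimal invocation combining the two flags:

rclone ls remote: --include "zaphod.txt" --ignore-case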

Rclone commands with filter patterns containing shell metacharacters may not work as expected in your shell and may require quoting.

E.g. linux, OSX (* metacharacter)

--include \*.jpg
--include '*.jpg'
--include='*.jpg'

On Microsoft Windows expansion is done by the command, not the shell, so --include *.jpg does not require quoting.

If the rclone error Command .... needs .... arguments maximum: you provided .... non flag arguments: is encountered, the cause is commonly spaces within the name of a remote or flag value. The fix then is to quote values containing spaces.

--min-size - Don't transfer any file smaller than this

Controls the minimum file size within the scope of an rclone command. Default units are KiB but abbreviations K, M, G, T or P are valid.

E.g. rclone ls remote: --min-size 50k lists files on remote: of 50 KiB size or larger.

--max-size - Don't transfer any file larger than this

Controls the maximum file size within the scope of an rclone command. Default units are KiB but abbreviations K, M, G, T or P are valid.

E.g. rclone ls remote: --max-size 1G lists files on remote: of 1 GiB size or smaller.

--max-age - Don't transfer any file older than this

Controls the maximum age of files within the scope of an rclone command. The default unit is seconds, but the following abbreviations are also valid:

ms - Milliseconds
s - Seconds
m - Minutes
h - Hours
d - Days
w - Weeks
M - Months
y - Years

--max-age can also be specified as an absolute time in the following formats:

RFC3339 - e.g. 2006-01-02T15:04:05Z or 2006-01-02T15:04:05+07:00
ISO8601 Date and time, local timezone - 2006-01-02T15:04:05
ISO8601 Date and time, local timezone - 2006-01-02 15:04:05
ISO8601 Date - 2006-01-02 (YYYY-MM-DD)

--max-age applies only to files and not to directories.

E.g. rclone ls remote: --max-age 2d lists files on remote: of 2 days old or less.

--min-age - Don't transfer any file younger than this

Controls the minimum age of files within the scope of an rclone command. (see --max-age for valid formats)

--min-age applies only to files and not to directories.

E.g. rclone ls remote: --min-age 2d lists files on remote: of 2 days old or more.

--delete-excluded - Delete files on dest excluded from sync

Important: this flag is dangerous to your data - use with --dry-run and -v first.

In conjunction with rclone sync, --delete-excluded deletes any files on the destination which are excluded from the command.

E.g. the scope of rclone sync -i A: B: can be restricted:

rclone --min-size 50k --delete-excluded sync A: B:

All files on B: which are less than 50 KiB are deleted because they are excluded from the rclone sync command.

--dump filters - dump the filters to the output

Dumps the defined filters to standard output in regular expression format.

Useful for debugging.
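
E.g. a quick way to inspect how a command's filter flags are translated into the internal rule list (the exact output format may vary between rclone versions):

rclone ls remote: --include "*.{png,jpg}" --dump filters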

--exclude-if-present - Exclude directories if filename is present

The --exclude-if-present flag controls whether a directory is within the scope of an rclone command based on the presence of a named file within it. The flag can be repeated to check for multiple file names; the presence of any of them will exclude the directory.

This flag has priority over other filter flags.

E.g. for the following directory structure:

dir1/file1
dir1/dir2/file2
dir1/dir2/dir3/file3
dir1/dir2/dir3/.ignore

The command rclone ls --exclude-if-present .ignore dir1 does not list dir3, file3 or .ignore.

The most frequent filter support issues on the rclone forum (https://forum.rclone.org/) are:

Not using paths relative to the root of the remote
Not using / to match from the root of a remote
Not using ** to match the contents of a directory

Rclone can serve a web based GUI (graphical user interface). This is somewhat experimental at the moment so things may be subject to change.

Run this command in a terminal and rclone will download and then display the GUI in a web browser.

rclone rcd --rc-web-gui

This will produce logs like this and rclone needs to continue to run to serve the GUI:

2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip
2019/08/25 11:40:14 NOTICE: Downloading webgui binary. Please wait. [Size: 3813937, Path :  /home/USER/.cache/rclone/webgui/v0.0.6.zip]
2019/08/25 11:40:16 NOTICE: Unzipping
2019/08/25 11:40:16 NOTICE: Serving remote control on http://127.0.0.1:5572/

This assumes you are running rclone locally on your machine. It is possible to separate the rclone and the GUI - see below for details.

If you wish to check for updates then you can add --rc-web-gui-update to the command line.

If you find your GUI broken, you may force it to update by adding --rc-web-gui-force-update.

By default, rclone will open your browser. Add --rc-web-gui-no-open-browser to disable this feature.

Once the GUI opens, you will be looking at the dashboard, which gives an overview.

On the left hand side you will see a series of view buttons you can click on:

Dashboard - main overview
Configs - examine and create new configurations
Explorer - view, download and upload files to the cloud storage systems
Backend - view or alter the backend config
Log out

(More docs and walkthrough video to come!)

When you run rclone rcd --rc-web-gui this is what happens:

Rclone starts but only runs the remote control API ("rc").
The API is bound to localhost with an auto-generated username and password.
If the API bundle is missing then rclone will download it.
rclone will start serving the files from the API bundle over the same port as the API
rclone will open the browser with a login_token so it can log straight in.

The rclone rcd command may use any of the flags documented on the rc page (https://rclone.org/rc/#supported-parameters).

The flag --rc-web-gui is shorthand for

Download the web GUI if necessary
Check we are using some authentication
--rc-user gui
--rc-pass <random password>
--rc-serve

These flags can be overridden as desired.

See also the rclone rcd documentation (https://rclone.org/commands/rclone_rcd/).

For example the GUI could be served on a public port over SSL with an htpasswd file, using the following flags:

--rc-web-gui
--rc-addr :443
--rc-htpasswd /path/to/htpasswd
--rc-cert /path/to/ssl.crt
--rc-key /path/to/ssl.key

If you want to run the GUI behind a proxy at /rclone you could use these flags:

--rc-web-gui
--rc-baseurl rclone
--rc-htpasswd /path/to/htpasswd

Or instead of htpasswd if you just want a single user and password:

--rc-user me
--rc-pass mypassword

The GUI is being developed in the: rclone/rclone-webui-react repository (https://github.com/rclone/rclone-webui-react).

Bug reports and contributions are very welcome :-)

If you have questions then please ask them on the rclone forum (https://forum.rclone.org/).

If rclone is run with the --rc flag then it starts an HTTP server which can be used to remote control rclone using its API.

You can either use the rc command to access the API or use HTTP directly.

If you just want to run a remote control then see the rcd (https://rclone.org/commands/rclone_rcd/) command.
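
E.g. a minimal local sketch: start a remote control server in one terminal (here without authentication, which is only sensible for localhost testing) and query it from another.

rclone rcd --rc-no-auth
rclone rc core/version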

Flag to start the HTTP server listening for remote requests.

IPaddress:Port or :Port to bind server to. (default "localhost:5572")

SSL PEM key (concatenation of certificate and CA certificate)

Client certificate authority to verify clients with

htpasswd file - if not provided no authentication is done

SSL PEM Private key

Maximum size of request header (default 4096)

User name for authentication.

Password for authentication.

Realm for authentication (default "rclone")

Timeout for server reading data (default 1h0m0s)

Timeout for server writing data (default 1h0m0s)

Enable the serving of remote objects via the HTTP interface. This means objects will be accessible at http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a listing of the remotes. Objects may be requested from remotes using this syntax http://127.0.0.1:5572/[remote:path]/path/to/object

Default Off.
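
E.g. with --rc-serve enabled an object could be fetched over HTTP using the syntax above (the remote name and object path are placeholders):

curl 'http://127.0.0.1:5572/[remote:path]/path/to/object' -o object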

Path to local files to serve on the HTTP server.

If this is set then rclone will serve the files in that directory. It will also open the root in the web browser if specified. This is for implementing browser based GUIs for rclone functions.

If --rc-user or --rc-pass is set then the URL that is opened will have the authorization in the URL in the http://user:pass@localhost/ style.

Default Off.

Enable OpenMetrics/Prometheus compatible endpoint at /metrics.

Default Off.

Set this flag to serve the default web gui on the same port as rclone.

Default Off.

Set the allowed Access-Control-Allow-Origin for rc requests.

Can be used with --rc-web-gui if rclone is running on a different IP than the web GUI.

Default is the IP address on which rc is running.

Set the URL to fetch the rclone-web-gui files from.

Default https://api.github.com/repos/rclone/rclone-webui-react/releases/latest.

Set this flag to check and update rclone-webui-react from the rc-web-fetch-url.

Default Off.

Set this flag to force update rclone-webui-react from the rc-web-fetch-url.

Default Off.

Set this flag to disable opening browser automatically when using web-gui.

Default Off.

Expire finished async jobs older than DURATION (default 60s).

Interval duration to check for expired async jobs (default 10s).

By default rclone will require authorisation to have been set up on the rc interface in order to use any methods which access any rclone remotes. E.g. operations/list is denied as it involves creating a remote, as is sync/copy.

If this is set then no authorisation will be required on the server to use these methods. The alternative is to use --rc-user and --rc-pass and use these credentials in the request.

Default Off.

Prefix for URLs.

Default is root

User-specified template.

Rclone itself implements the remote control protocol in its rclone rc command.

You can use it like this

$ rclone rc rc/noop param1=one param2=two
{
    "param1": "one",
    "param2": "two"
}

Run rclone rc on its own to see the help for the installed remote control commands.

rclone rc also supports a --json flag which can be used to send more complicated input parameters.

$ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop
{
    "p1": [
        1,
        "2",
        null,
        4
    ],
    "p2": {
        "a": 1,
        "b": 2
    }
}

If the parameter being passed is an object then it can be passed as a JSON string rather than using the --json flag which simplifies the command line.

rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}'

Rather than

rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}'

The rc interface supports some special parameters which apply to all commands. These start with _ to show they are different.

Each rc call is classified as a job and it is assigned its own id. By default jobs are executed immediately as they are created, i.e. synchronously.

If _async has a true value when supplied to an rc call then it will return immediately with a job id and the task will be run in the background. The job/status call can be used to get information of the background job. The job can be queried for up to 1 minute after it has finished.

It is recommended that potentially long running jobs, e.g. sync/sync, sync/copy, sync/move, operations/purge are run with the _async flag to avoid any potential problems with the HTTP request and response timing out.

Starting a job with the _async flag:

$ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop
{
    "jobid": 2
}

Query the status to see if the job has finished. For more information on the meaning of these return parameters see the job/status call.

$ rclone rc --json '{ "jobid":2 }' job/status
{
    "duration": 0.000124163,
    "endTime": "2018-10-27T11:38:07.911245881+01:00",
    "error": "",
    "finished": true,
    "id": 2,
    "output": {
        "_async": true,
        "p1": [
            1,
            "2",
            null,
            4
        ],
        "p2": {
            "a": 1,
            "b": 2
        }
    },
    "startTime": "2018-10-27T11:38:07.911121728+01:00",
    "success": true
}

job/list can be used to show the running or recently completed jobs

$ rclone rc job/list
{
    "jobids": [
        2
    ]
}

If you wish to set config (the equivalent of the global flags) for the duration of an rc call only then pass in the _config parameter.

This should be in the same format as the config key returned by options/get.

For example, if you wished to run a sync with the --checksum parameter, you would pass this parameter in your JSON blob.

"_config":{"CheckSum": true}

If using rclone rc this could be passed as

rclone rc operations/sync ... _config='{"CheckSum": true}'

Any config parameters you don't set will inherit the global defaults which were set with command line flags or environment variables.

Note that it is possible to set some values as strings or integers - see data types for more info. Here is an example setting the equivalent of --buffer-size in string or integer format.

"_config":{"BufferSize": "42M"}
"_config":{"BufferSize": 44040192}

If you wish to check the _config assignment has worked properly then calling options/local will show what the value got set to.

If you wish to set filters for the duration of an rc call only then pass in the _filter parameter.

This should be in the same format as the filter key returned by options/get.

For example, if you wished to run a sync with these flags

--max-size 1M --max-age 42s --include "a" --include "b"

you would pass this parameter in your JSON blob.

"_filter":{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}

If using rclone rc this could be passed as

rclone rc ... _filter='{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}'

Any filter parameters you don't set will inherit the global defaults which were set with command line flags or environment variables.

Note that it is possible to set some values as strings or integers - see data types for more info. Here is an example setting the equivalent of --buffer-size in string or integer format.

"_filter":{"MinSize": "42M"}
"_filter":{"MinSize": 44040192}

If you wish to check the _filter assignment has worked properly then calling options/local will show what the value got set to.

Each rc call has its own stats group for tracking its metrics. By default grouping is done using a composite group name made from the prefix job/ and the id of the job, like so: job/1.

If _group has a value then stats for that request will be grouped under that value. This allows the caller to group stats under their own name.

Stats for a specific group can be accessed by passing group to core/stats:

$ rclone rc --json '{ "group": "job/1" }' core/stats
{
    "speed": 12345
    ...
}

When the API returns types, these will mostly be straightforward integer, string or boolean types.

However some of the types returned by the options/get call and taken by the options/set calls, as well as the vfsOpt, mountOpt and the _config parameters, need a little more explanation:

Duration - these are returned as an integer duration in nanoseconds. They may be set as an integer, or they may be set with time string, eg "5s". See the options section (https://rclone.org/docs/#options) for more info.
Size - these are returned as an integer number of bytes. They may be set as an integer or they may be set with a size suffix string, eg "10M". See the options section (https://rclone.org/docs/#options) for more info.
Enumerated type (such as CutoffMode, DumpFlags, LogLevel, VfsCacheMode) - these will be returned as an integer and may be set as an integer but more conveniently they can be set as a string, eg "HARD" for CutoffMode or "DEBUG" for LogLevel.
BandwidthSpec - this will be set and returned as a string, eg "1M".

Remotes are specified with the fs=, srcFs=, dstFs= parameters depending on the command being used.

The parameters can be a string as per the rest of rclone, eg s3:bucket/path or :sftp:/my/dir. They can also be specified as JSON blobs.

If specifying a JSON blob it should be an object mapping strings to strings. These values will be used to configure the remote. There are 3 special values which may be set:

type - set to type to specify a remote called :type:
_name - set to name to specify a remote called name:
_root - sets the root of the remote - may be empty

One of _name or type should normally be set. If the local backend is desired then type should be set to local. If _root isn't specified then it defaults to the root of the remote.

For example this JSON is equivalent to remote:/tmp

{
    "_name": "remote",
    "_path": "/tmp"
}

And this is equivalent to :sftp,host='example.com':/tmp

{
    "type": "sftp",
    "host": "example.com",
    "_path": "/tmp"
}

And this is equivalent to /tmp/dir

{
    "type": "local",
    "_path": "/tmp/dir"
}

This takes the following parameters:

command - a string with the command name
fs - a remote name string e.g. "drive:"
arg - a list of arguments for the backend command
opt - a map of string to string of options

Returns:

result - result from the backend command

Example:

rclone rc backend/command command=noop fs=. -o echo=yes -o blue -a path1 -a path2

Returns

{
    "result": {
        "arg": [
            "path1",
            "path2"
        ],
        "name": "noop",
        "opt": {
            "blue": "",
            "echo": "yes"
        }
    }
}

Note that this is the direct equivalent of using this "backend" command:

rclone backend noop . -o echo=yes -o blue path1 path2

Note that arguments must be preceded by the "-a" flag

See the backend (https://rclone.org/commands/rclone_backend/) command for more information.

Authentication is required for this call.

Purge a remote from the cache backend. Supports either a directory or a file. Params: - remote = path to remote (required) - withData = true/false to delete cached data (chunks) as well (optional)

Eg

rclone rc cache/expire remote=path/to/sub/folder/
rclone rc cache/expire remote=/ withData=true

Ensure the specified file chunks are cached on disk.

The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end]

start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file.

Some valid examples are:
":5,-5:" -> the first and last five chunks
"0,-2" -> the first and the second last chunk
"0:10" -> the first ten chunks

Any parameter with a key that starts with "file" can be used to specify files to fetch, e.g.

rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye

File names will automatically be encrypted when a crypt remote is used on top of the cache.

Show statistics for the cache remote.

This takes the following parameters:

name - name of remote
parameters - a map of { "key": "value" } pairs
type - type of the new remote
opt - a dictionary of options to control the configuration
obscure - declare passwords are plain and need obscuring
noObscure - declare passwords are already obscured and don't need obscuring
nonInteractive - don't interact with a user, return questions
continue - continue the config process with an answer
all - ask all the config questions not just the post config ones
state - state to restart with - used with continue
result - result to restart with - used with continue

See the config create (https://rclone.org/commands/rclone_config_create/) command for more information on the above.
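
E.g. a sketch of creating a remote non-interactively; the alias backend, its remote option and the name are illustrative assumptions, and the parameters object is passed as a JSON string as described earlier:

rclone rc config/create name=myalias type=alias parameters='{"remote": "/tmp"}'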

Authentication is required for this call.

Parameters:

name - name of remote to delete

See the config delete (https://rclone.org/commands/rclone_config_delete/) command for more information on the above.

Authentication is required for this call.

Returns a JSON object: - key: value

Where keys are remote names and values are the config parameters.

See the config dump (https://rclone.org/commands/rclone_config_dump/) command for more information on the above.

Authentication is required for this call.

Parameters:

name - name of remote to get

See the config dump (https://rclone.org/commands/rclone_config_dump/) command for more information on the above.

Authentication is required for this call.

Returns - remotes - array of remote names

See the listremotes (https://rclone.org/commands/rclone_listremotes/) command for more information on the above.

Authentication is required for this call.

This takes the following parameters:

name - name of remote
parameters - a map of { "key": "value" } pairs

See the config password (https://rclone.org/commands/rclone_config_password/) command for more information on the above.

Authentication is required for this call.

Returns a JSON object: - providers - array of objects

See the config providers (https://rclone.org/commands/rclone_config_providers/) command for more information on the above.

Authentication is required for this call.

This takes the following parameters:

name - name of remote
parameters - a map of { "key": "value" } pairs
opt - a dictionary of options to control the configuration
obscure - declare passwords are plain and need obscuring
noObscure - declare passwords are already obscured and don't need obscuring
nonInteractive - don't interact with a user, return questions
continue - continue the config process with an answer
all - ask all the config questions not just the post config ones
state - state to restart with - used with continue
result - result to restart with - used with continue

See the config update (https://rclone.org/commands/rclone_config_update/) command for more information on the above.

Authentication is required for this call.

This sets the bandwidth limit to the string passed in. This should be a single bandwidth limit entry or a pair of upload:download bandwidth.

Eg

rclone rc core/bwlimit rate=off
{
    "bytesPerSecond": -1,
    "bytesPerSecondTx": -1,
    "bytesPerSecondRx": -1,
    "rate": "off"
}
rclone rc core/bwlimit rate=1M
{
    "bytesPerSecond": 1048576,
    "bytesPerSecondTx": 1048576,
    "bytesPerSecondRx": 1048576,
    "rate": "1M"
}
rclone rc core/bwlimit rate=1M:100k
{
    "bytesPerSecond": 1048576,
    "bytesPerSecondTx": 1048576,
    "bytesPerSecondRx": 131072,
    "rate": "1M"
}

If the rate parameter is not supplied then the bandwidth is queried

rclone rc core/bwlimit
{
    "bytesPerSecond": 1048576,
    "bytesPerSecondTx": 1048576,
    "bytesPerSecondRx": 1048576,
    "rate": "1M"
}

The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.

In either case "rate" is returned as a human-readable string, and "bytesPerSecond" is returned as a number.

This takes the following parameters:

command - a string with the command name.
arg - a list of arguments for the backend command.
opt - a map of string to string of options.
returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT", "STREAM_ONLY_STDERR").
Defaults to "COMBINED_OUTPUT" if not set.
The STREAM returnTypes will write the output to the body of the HTTP message.
The COMBINED_OUTPUT will write the output to the "result" parameter.

Returns:

result - result from the backend command.
Only set when using returnType "COMBINED_OUTPUT".
error - set if rclone exits with an error code.
returnType - one of ("COMBINED_OUTPUT", "STREAM", "STREAM_ONLY_STDOUT", "STREAM_ONLY_STDERR").

Example:

rclone rc core/command command=ls -a mydrive:/ -o max-depth=1
rclone rc core/command -a ls -a mydrive:/ -o max-depth=1

Returns:

{
    "error": false,
    "result": "<Raw command line output>"
}

OR

{
    "error": true,
    "result": "<Raw command line output>"
}

Authentication is required for this call.

This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems.

This returns a list of stats groups currently in memory.

Returns the following values:

{
    "groups": an array of group names:
    [
        "group1",
        "group2",
        ...
    ]
}

This returns the memory statistics of the running program. What the values mean is explained in the go docs: https://golang.org/pkg/runtime/#MemStats

The most interesting values for most people are:

HeapAlloc - this is the amount of memory rclone is actually using
HeapSys - this is the amount of memory rclone has obtained from the OS
Sys - this is the total amount of memory requested from the OS
It is virtual memory so may include unused memory

Pass a clear string and rclone will obscure it for the config file: - clear - string

Returns: - obscured - string
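
E.g. (the password shown is just a placeholder):

rclone rc core/obscure clear=mysecretpassword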

This returns the PID of the current process. Useful for stopping the rclone process.

(Optional) Pass an exit code to be used for terminating the app: - exitCode - int

This returns all available stats:

rclone rc core/stats

If group is not provided then summed up stats for all groups will be returned.

Parameters

group - name of the stats group (string)

Returns the following values:

{
    "bytes": total transferred bytes since the start of the group,
    "checks": number of files checked,
    "deletes": number of files deleted,
    "elapsedTime": time in floating point seconds since rclone was started,
    "errors": number of errors,
    "eta": estimated time in seconds until the group completes,
    "fatalError": boolean whether there has been at least one fatal error,
    "lastError": last error string,
    "renames": number of files renamed,
    "retryError": boolean showing whether there has been at least one non-NoRetryError,
    "speed": average speed in bytes per second since start of the group,
    "totalBytes": total number of bytes in the group,
    "totalChecks": total number of checks in the group,
    "totalTransfers": total number of transfers in the group,
    "transferTime": total time spent on running jobs,
    "transfers": number of transferred files,
    "transferring": an array of currently active file transfers:
    [
        {
            "bytes": total transferred bytes for this file,
            "eta": estimated time in seconds until file transfer completion,
            "name": name of the file,
            "percentage": progress of the file transfer in percent,
            "speed": average speed over the whole transfer in bytes per second,
            "speedAvg": current speed in bytes per second as an exponentially weighted moving average,
            "size": size of the file in bytes
        }
    ],
    "checking": an array of names of currently active file checks
    []
}

Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined.

This deletes the entire stats group.

Parameters

group - name of the stats group (string)

This clears counters, errors and finished transfers for all stats or specific stats group if group is provided.

Parameters

group - name of the stats group (string)

This returns stats about completed transfers:

rclone rc core/transferred

If group is not provided then completed transfers for all groups will be returned.

Note only the last 100 completed transfers are returned.

Parameters

group - name of the stats group (string)

Returns the following values:

{
    "transferred": an array of completed transfers (including failed ones):
    [
        {
            "name": name of the file,
            "size": size of the file in bytes,
            "bytes": total transferred bytes for this file,
            "checked": if the transfer is only checked (skipped, deleted),
            "timestamp": integer representing millisecond unix epoch,
            "error": string description of the error (empty if successful),
            "jobid": id of the job that this transfer belongs to
        }
    ]
}

This shows the current version of rclone and the go runtime:

version - rclone version, e.g. "v1.53.0"
decomposed - version number as [major, minor, patch]
isGit - boolean - true if this was compiled from the git version
isBeta - boolean - true if this is a beta version
os - OS in use according to Go
arch - cpu architecture in use according to Go
goVersion - version of Go runtime in use
linking - type of rclone executable (static or dynamic)
goTags - space separated build tags or "none"

SetBlockProfileRate controls the fraction of goroutine blocking events that are reported in the blocking profile. The profiler aims to sample an average of one blocking event per rate nanoseconds spent blocked.

To include every blocking event in the profile, pass rate = 1. To turn off profiling entirely, pass rate <= 0.

After calling this you can use this to see the blocking profile:

go tool pprof http://localhost:5572/debug/pprof/block

Parameters:

rate - int
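
E.g. a sketch which samples every blocking event, assuming this call is registered as debug/set-block-profile-rate:

rclone rc debug/set-block-profile-rate rate=1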

SetMutexProfileFraction controls the fraction of mutex contention events that are reported in the mutex profile. On average 1/rate events are reported. The previous rate is returned.

To turn off profiling entirely, pass rate 0. To just read the current rate, pass rate < 0. (For n>1 the details of sampling may change.)

Once this is set you can use this to profile the mutex contention:

go tool pprof http://localhost:5572/debug/pprof/mutex

Parameters:

rate - int

Results:

previousRate - int

This clears the fs cache. This is where remotes created from backends are cached for a short while to make repeated rc calls more efficient.

If you change the parameters of a backend then you may want to call this to clear an existing remote out of the cache before re-creating it.

Authentication is required for this call.

This returns the number of entries in the fs cache.

Returns - entries - number of items in the cache

Authentication is required for this call.

Parameters: None.

Results:

jobids - array of integer job ids.

Parameters:

jobid - id of the job (integer).

Results:

finished - boolean
duration - time in seconds that the job ran for
endTime - time the job finished (e.g. "2018-10-26T18:50:20.528746884+01:00")
error - error from the job or empty string for no error
finished - boolean whether the job has finished or not
id - as passed in above
startTime - time the job started (e.g. "2018-10-26T18:50:20.528336039+01:00")
success - boolean - true for success false otherwise
output - output of the job as would have been returned if called synchronously
progress - output of the progress related to the underlying job

Parameters:

jobid - id of the job (integer).
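
E.g. a sketch of stopping the background job started in the earlier example, assuming this section describes the job/stop call:

rclone rc job/stop jobid=2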

This shows currently mounted points, which can be used for performing an unmount.

This takes no parameters and returns

mountPoints: list of current mount points

Eg

rclone rc mount/listmounts

Authentication is required for this call.

rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

If no mountType is provided, the priority is given as follows: 1. mount 2. cmount 3. mount2

This takes the following parameters:

fs - a remote path to be mounted (required)
mountPoint: valid path on the local machine (required)
mountType: one of the values (mount, cmount, mount2) specifies the mount implementation to use
mountOpt: a JSON object with Mount options in.
vfsOpt: a JSON object with VFS options in.

Example:

rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'

The vfsOpt are as described in options/get and can be seen in the "vfs" section when running the command below, and the mountOpt can be seen in the "mount" section:

rclone rc options/get

Authentication is required for this call.

This shows all possible mount types and returns them as a list.

This takes no parameters and returns

mountTypes: list of mount types

The mount types are strings like "mount", "mount2", "cmount" and can be passed to mount/mount as the mountType parameter.

Eg

rclone rc mount/types

Authentication is required for this call.

rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

This takes the following parameters:

mountPoint: valid path on the local machine where the mount was created (required)

Example:

rclone rc mount/unmount mountPoint=/home/<user>/mountPoint

Authentication is required for this call.

This unmounts all currently mounted points.

This takes no parameters and returns error if unmount does not succeed.

Eg

rclone rc mount/unmountall

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:"

The result is as returned from rclone about --json

See the about (https://rclone.org/commands/rclone_about/) command for more information on the above.
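
E.g. a sketch, assuming this call is registered as operations/about and using a placeholder remote name:

rclone rc operations/about fs=remote: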

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:"

See the cleanup (https://rclone.org/commands/rclone_cleanup/) command for more information on the above.

Authentication is required for this call.

This takes the following parameters:

srcFs - a remote name string e.g. "drive:" for the source
srcRemote - a path within that remote e.g. "file.txt" for the source
dstFs - a remote name string e.g. "drive2:" for the destination
dstRemote - a path within that remote e.g. "file2.txt" for the destination

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:"
remote - a path within that remote e.g. "dir"
url - string, URL to read from
autoFilename - boolean, set to true to retrieve destination file name from url

See the copyurl (https://rclone.org/commands/rclone_copyurl/) command for more information on the above.
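
E.g. a sketch, assuming this call is registered as operations/copyurl; the remote path and URL are placeholders:

rclone rc operations/copyurl fs=remote: remote=dir/file.zip url=https://example.com/file.zip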

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:"

See the delete (https://rclone.org/commands/rclone_delete/) command for more information on the above.

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:"
remote - a path within that remote e.g. "dir"

See the deletefile (https://rclone.org/commands/rclone_deletefile/) command for more information on the above.

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:"

This returns info about the remote passed in:

{

// optional features and whether they are available or not
"Features": {
"About": true,
"BucketBased": false,
"BucketBasedRootOK": false,
"CanHaveEmptyDirectories": true,
"CaseInsensitive": false,
"ChangeNotify": false,
"CleanUp": false,
"Command": true,
"Copy": false,
"DirCacheFlush": false,
"DirMove": true,
"Disconnect": false,
"DuplicateFiles": false,
"GetTier": false,
"IsLocal": true,
"ListR": false,
"MergeDirs": false,
"MetadataInfo": true,
"Move": true,
"OpenWriterAt": true,
"PublicLink": false,
"Purge": true,
"PutStream": true,
"PutUnchecked": false,
"ReadMetadata": true,
"ReadMimeType": false,
"ServerSideAcrossConfigs": false,
"SetTier": false,
"SetWrapper": false,
"Shutdown": false,
"SlowHash": true,
"SlowModTime": false,
"UnWrap": false,
"UserInfo": false,
"UserMetadata": true,
"WrapFs": false,
"WriteMetadata": true,
"WriteMimeType": false
},
// Names of hashes available
"Hashes": [
"md5",
"sha1",
"whirlpool",
"crc32",
"sha256",
"dropbox",
"mailru",
"quickxor"
],
"Name": "local", // Name as created
"Precision": 1, // Precision of timestamps in ns
"Root": "/", // Path as created
"String": "Local file system at /", // how the remote will appear in logs
// Information about the system metadata for this backend
"MetadataInfo": {
"System": {
"atime": {
"Help": "Time of last access",
"Type": "RFC 3339",
"Example": "2006-01-02T15:04:05.999999999Z07:00"
},
"btime": {
"Help": "Time of file birth (creation)",
"Type": "RFC 3339",
"Example": "2006-01-02T15:04:05.999999999Z07:00"
},
"gid": {
"Help": "Group ID of owner",
"Type": "decimal number",
"Example": "500"
},
"mode": {
"Help": "File type and mode",
"Type": "octal, unix style",
"Example": "0100664"
},
"mtime": {
"Help": "Time of last modification",
"Type": "RFC 3339",
"Example": "2006-01-02T15:04:05.999999999Z07:00"
},
"rdev": {
"Help": "Device ID (if special file)",
"Type": "hexadecimal",
"Example": "1abc"
},
"uid": {
"Help": "User ID of owner",
"Type": "decimal number",
"Example": "500"
}
},
"Help": "Textual help string\n"
} }

This command does not have a command line equivalent so use this instead:

rclone rc --loopback operations/fsinfo fs=remote:

This takes the following parameters:

fs - a remote name string e.g. "drive:"
remote - a path within that remote e.g. "dir"
opt - a dictionary of options to control the listing (optional)
recurse - If set recurse directories
noModTime - If set return modification time
showEncrypted - If set show decrypted names
showOrigIDs - If set show the IDs for each item if known
showHash - If set return a dictionary of hashes
noMimeType - If set don't show mime types
dirsOnly - If set only show directories
filesOnly - If set only show files
metadata - If set return metadata of objects also
hashTypes - array of strings of hash types to show if showHash set

Returns:

list
This is an array of objects as described in the lsjson command

See the lsjson (https://rclone.org/commands/rclone_lsjson/) command for more information on the above and examples.

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:"
remote - a path within that remote e.g. "dir"

See the mkdir (https://rclone.org/commands/rclone_mkdir/) command for more information on the above.

Authentication is required for this call.

This takes the following parameters:

srcFs - a remote name string e.g. "drive:" for the source
srcRemote - a path within that remote e.g. "file.txt" for the source
dstFs - a remote name string e.g. "drive2:" for the destination
dstRemote - a path within that remote e.g. "file2.txt" for the destination

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:"
remote - a path within that remote e.g. "dir"
unlink - boolean - if set removes the link rather than adding it (optional)
expire - string - the expiry time of the link e.g. "1d" (optional)

Returns:

url - URL of the resource

See the link (https://rclone.org/commands/rclone_link/) command for more information on the above.
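
E.g. a sketch, assuming this call is registered as operations/publiclink; the remote path and expiry are placeholders:

rclone rc operations/publiclink fs=remote: remote=dir/file.txt expire=1d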

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:"
remote - a path within that remote e.g. "dir"

See the purge (https://rclone.org/commands/rclone_purge/) command for more information on the above.

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:"
remote - a path within that remote e.g. "dir"

See the rmdir (https://rclone.org/commands/rclone_rmdir/) command for more information on the above.

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:"
remote - a path within that remote e.g. "dir"
leaveRoot - boolean, set to true not to delete the root

See the rmdirs (https://rclone.org/commands/rclone_rmdirs/) command for more information on the above.

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:path/to/dir"

Returns:

count - number of files
bytes - number of bytes in those files

See the size (https://rclone.org/commands/rclone_size/) command for more information on the above.
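
E.g. a sketch, assuming this call is registered as operations/size and using a placeholder path:

rclone rc operations/size fs=remote:path/to/dir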

Authentication is required for this call.

This takes the following parameters

fs - a remote name string eg "drive:"
remote - a path within that remote eg "dir"
opt - a dictionary of options to control the listing (optional)
see operations/list for the options

The result is

item - an object as described in the lsjson command. Will be null if not found.

Note that if you are only interested in files then it is much more efficient to set the filesOnly flag in the options.

See the lsjson (https://rclone.org/commands/rclone_lsjson/) command for more information on the above and examples.

Authentication is required for this call.

This takes the following parameters:

fs - a remote name string e.g. "drive:"
remote - a path within that remote e.g. "dir"
each part in body represents a file to be uploaded

See the uploadfile (https://rclone.org/commands/rclone_uploadfile/) command for more information on the above.

Authentication is required for this call.

Returns: - options - a list of the options block names

Returns an object where keys are option block names and values are an object with the current option values in.

Note that these are the global options which are unaffected by use of the _config and _filter parameters. If you wish to read the parameters set in _config then use options/config and for _filter use options/filter.

This shows the internal names of the option within rclone which should map to the external options very easily with a few exceptions.

Returns an object with the keys "config" and "filter". The "config" key contains the local config and the "filter" key contains the local filters.

Note that these are the local options specific to this rc call. If _config was not supplied then they will be the global options. Likewise with "_filter".

This call is mostly useful for seeing if _config and _filter passing is working.

This shows the internal names of the option within rclone which should map to the external options very easily with a few exceptions.

Parameters:

option block name containing an object with
key: value

Repeated as often as required.

Only supply the options you wish to change. If an option is unknown it will be silently ignored. Not all options will have an effect when changed like this.

For example:

This sets DEBUG level logs (-vv) (these can be set by number or string)

rclone rc options/set --json '{"main": {"LogLevel": "DEBUG"}}'
rclone rc options/set --json '{"main": {"LogLevel": 8}}'

And this sets INFO level logs (-v)

rclone rc options/set --json '{"main": {"LogLevel": "INFO"}}'

And this sets NOTICE level logs (normal without -v)

rclone rc options/set --json '{"main": {"LogLevel": "NOTICE"}}'

Used for adding a plugin to the webgui.

This takes the following parameters:

url - http url of the github repo where the plugin is hosted (http://github.com/rclone/rclone-webui-react).

Example:

rclone rc pluginsctl/addPlugin

Authentication is required for this call.

This shows all possible plugins by a mime type.

This takes the following parameters:

type - supported mime type by a loaded plugin e.g. (video/mp4, audio/mp3).
pluginType - filter plugins based on their type e.g. (DASHBOARD, FILE_HANDLER, TERMINAL).

Returns:

loadedPlugins - list of current production plugins.
testPlugins - list of temporarily loaded development plugins, usually running on a different server.

Example:

rclone rc pluginsctl/getPluginsForType type=video/mp4

Authentication is required for this call.

This allows you to get the currently enabled plugins and their details.

This takes no parameters and returns:

loadedPlugins - list of current production plugins.
testPlugins - list of temporarily loaded development plugins, usually running on a different server.

E.g.

rclone rc pluginsctl/listPlugins

Authentication is required for this call.

Allows listing of test plugins with the rclone.test set to true in package.json of the plugin.

This takes no parameters and returns:

loadedTestPlugins - list of currently available test plugins.

E.g.

rclone rc pluginsctl/listTestPlugins

Authentication is required for this call.

This allows you to remove a plugin using its name.

This takes parameters:

name - name of the plugin in the format author/plugin_name.

E.g.

rclone rc pluginsctl/removePlugin name=rclone/video-plugin

Authentication is required for this call.

This allows you to remove a test plugin using its name.

This takes the following parameters:

name - name of the plugin in the format author/plugin_name.

Example:

rclone rc pluginsctl/removeTestPlugin name=rclone/rclone-webui-react

Authentication is required for this call.

This returns an error with the input as part of its error string. Useful for testing error handling.

This lists all the registered remote control commands as a JSON map in the commands response.

This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

Authentication is required for this call.

This takes the following parameters

path1 - a remote directory string e.g. drive:path1
path2 - a remote directory string e.g. drive:path2
dryRun - dry-run mode
resync - performs the resync run
checkAccess - abort if RCLONE_TEST files are not found on both filesystems
checkFilename - file name for checkAccess (default: RCLONE_TEST)
maxDelete - abort sync if percentage of deleted files is above this threshold (default: 50)
force - maxDelete safety check and run the sync
checkSync - true by default, false disables comparison of final listings, "only" will skip the sync and only compare listings from the last run
removeEmptyDirs - remove empty directories at the final cleanup step
filtersFile - read filtering patterns from a file
workdir - server directory for history files (default: /home/ncw/.cache/rclone/bisync)
noCleanup - retain working files

See bisync command help (https://rclone.org/commands/rclone_bisync/) and full bisync description (https://rclone.org/bisync/) for more information.
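
E.g. a sketch of a dry-run bisync started as a background job (see the _async parameter above); the paths are placeholders and the call is assumed to be registered as sync/bisync:

rclone rc sync/bisync path1=drive:path1 path2=drive:path2 dryRun=true _async=true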

Authentication is required for this call.

This takes the following parameters:

srcFs - a remote name string e.g. "drive:src" for the source
dstFs - a remote name string e.g. "drive:dst" for the destination
createEmptySrcDirs - create empty src directories on destination if set

See the copy (https://rclone.org/commands/rclone_copy/) command for more information on the above.
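
E.g. a sketch using the placeholder remotes above and running the copy as a background job, assuming this section describes the sync/copy call:

rclone rc sync/copy srcFs=drive:src dstFs=drive:dst _async=true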

Authentication is required for this call.

This takes the following parameters:

srcFs - a remote name string e.g. "drive:src" for the source
dstFs - a remote name string e.g. "drive:dst" for the destination
createEmptySrcDirs - create empty src directories on destination if set
deleteEmptySrcDirs - delete empty src directories if set

See the move (https://rclone.org/commands/rclone_move/) command for more information on the above.

Authentication is required for this call.

This takes the following parameters:

srcFs - a remote name string e.g. "drive:src" for the source
dstFs - a remote name string e.g. "drive:dst" for the destination
createEmptySrcDirs - create empty src directories on destination if set

See the sync (https://rclone.org/commands/rclone_sync/) command for more information on the above.

Authentication is required for this call.

This forgets the paths in the directory cache causing them to be re-read from the remote when needed.

If no paths are passed in then it will forget all the paths in the directory cache.

rclone rc vfs/forget

Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any starting with dir will forget that dir, e.g.

rclone rc vfs/forget file=hello file2=goodbye dir=home/junk

This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.

This lists the active VFSes.

It returns a list under the key "vfses" where the values are the VFS names that could be passed to the other VFS commands in the "fs" parameter.

Without any parameter given this returns the current status of the poll-interval setting.

When the interval=duration parameter is set, the poll-interval value is updated and the polling function is notified. Setting interval=0 disables poll-interval.

rclone rc vfs/poll-interval interval=5m

The timeout=duration parameter can be used to specify a time to wait for the current poll function to apply the new value. If timeout is less than or equal to 0, which is the default, rclone waits indefinitely.

The new poll-interval value will only be active when the timeout is not reached.

If poll-interval is updated or disabled temporarily, some changes might not get picked up by the polling function, depending on the used remote.

This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.

This reads the directories for the specified paths and freshens the directory cache.

If no paths are passed in then it will refresh the root directory.

rclone rc vfs/refresh

Otherwise pass directories in as dir=path. Any parameter key starting with dir will refresh that directory, e.g.

rclone rc vfs/refresh dir=home/junk dir2=data/misc

If the parameter recursive=true is given the whole directory tree will get refreshed. This refresh will use --fast-list if enabled.

This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.

This returns stats for the selected VFS.

{
    // Status of the disk cache - only present if --vfs-cache-mode > off
    "diskCache": {
        "bytesUsed": 0,
        "erroredFiles": 0,
        "files": 0,
        "hashType": 1,
        "outOfSpace": false,
        "path": "/home/user/.cache/rclone/vfs/local/mnt/a",
        "pathMeta": "/home/user/.cache/rclone/vfsMeta/local/mnt/a",
        "uploadsInProgress": 0,
        "uploadsQueued": 0
    },
    "fs": "/mnt/a",
    "inUse": 1,
    // Status of the in memory metadata cache
    "metadataCache": {
        "dirs": 1,
        "files": 0
    },
    // Options as returned by options/get
    "opt": {
        "CacheMaxAge": 3600000000000,
        // ...
        "WriteWait": 1000000000
    }
}

This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.

Rclone implements a simple HTTP based protocol.

Each endpoint takes a JSON object and returns a JSON object or an error. The JSON objects are essentially a map of string names to values.

All calls must be made using POST.

The input objects can be supplied using URL parameters, POST parameters or by supplying "Content-Type: application/json" and a JSON blob in the body. There are examples of these below using curl.

The response will be a JSON blob in the body of the response. This is formatted to be reasonably human-readable.

If an error occurs then there will be an HTTP error status (e.g. 500) and the body of the response will contain a JSON encoded error object, e.g.

{
    "error": "Expecting string value for key \"remote\" (was float64)",
    "input": {
        "fs": "/tmp",
        "remote": 3
    },
    "path": "operations/rmdir",
    "status": 400
}

The keys in the error response are:

error - error string
input - the input parameters to the call
status - the HTTP status code
path - the path of the call

The server implements basic CORS support and allows all origins for that. The response to a preflight OPTIONS request will echo the requested "Access-Control-Request-Headers" back.
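As a minimal sketch of such a preflight request (assuming rclone is listening on the default localhost:5572), the requested headers are simply echoed back in the response:

curl -i -X OPTIONS -H "Origin: http://example.com" -H "Access-Control-Request-Headers: Content-Type" 'http://localhost:5572/rc/noop'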

curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'

Response

{
    "potato": "1",
    "sausage": "2"
}

Here is what an error response looks like:

curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
{
    "error": "arbitrary error on input map[potato:1 sausage:2]",
    "input": {
        "potato": "1",
        "sausage": "2"
    }
}

Note that curl doesn't return errors to the shell unless you use the -f option

$ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
curl: (22) The requested URL returned error: 400 Bad Request
$ echo $?
22

curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop

Response

{
    "potato": "1",
    "sausage": "2"
}

Note that you can combine these with URL parameters too with the POST parameters taking precedence.

curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4"

Response

{
    "potato": "1",
    "rutabaga": "3",
    "sausage": "4"
}

curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop

Response

{
    "potato": 2,
    "sausage": 1
}

This can be combined with URL parameters too if required. The JSON blob takes precedence.

curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4'
{
    "potato": 2,
    "rutabaga": "3",
    "sausage": 1
}

If you use the --rc flag this will also enable the use of the Go profiling tools on the same port.

To use these, first install Go (https://golang.org/doc/install).

To profile rclone's memory use you can run:

go tool pprof -web http://localhost:5572/debug/pprof/heap

This should open a page in your browser showing what is using what memory.

You can also use the -text flag to produce a textual summary

$ go tool pprof -text http://localhost:5572/debug/pprof/heap
Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total

flat flat% sum% cum cum%
1024.03kB 66.62% 66.62% 1024.03kB 66.62% github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.addDecoderNode
513kB 33.38% 100% 513kB 33.38% net/http.newBufioWriterSize
0 0% 100% 1024.03kB 66.62% github.com/rclone/rclone/cmd/all.init
0 0% 100% 1024.03kB 66.62% github.com/rclone/rclone/cmd/serve.init
0 0% 100% 1024.03kB 66.62% github.com/rclone/rclone/cmd/serve/restic.init
0 0% 100% 1024.03kB 66.62% github.com/rclone/rclone/vendor/golang.org/x/net/http2.init
0 0% 100% 1024.03kB 66.62% github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init
0 0% 100% 1024.03kB 66.62% github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init.0
0 0% 100% 1024.03kB 66.62% main.init
0 0% 100% 513kB 33.38% net/http.(*conn).readRequest
0 0% 100% 513kB 33.38% net/http.(*conn).serve
0 0% 100% 1024.03kB 66.62% runtime.main

Memory leaks are most often caused by goroutine leaks keeping alive memory which should have been garbage collected.

See all active goroutines using

curl http://localhost:5572/debug/pprof/goroutine?debug=1

Or go to http://localhost:5572/debug/pprof/goroutine?debug=1 in your browser.

You can see a summary of profiles available at http://localhost:5572/debug/pprof/

Here is how to use some of them:

Memory: go tool pprof http://localhost:5572/debug/pprof/heap
Go routines: curl http://localhost:5572/debug/pprof/goroutine?debug=1
30-second CPU profile: go tool pprof http://localhost:5572/debug/pprof/profile
5-second execution trace: wget http://localhost:5572/debug/pprof/trace?seconds=5
Goroutine blocking profile
Enable first with: rclone rc debug/set-block-profile-rate rate=1
go tool pprof http://localhost:5572/debug/pprof/block
Contended mutexes:
Enable first with: rclone rc debug/set-mutex-profile-fraction rate=1
go tool pprof http://localhost:5572/debug/pprof/mutex

See the net/http/pprof docs (https://golang.org/pkg/net/http/pprof/) for more info on how to use the profiling and for a general overview see the Go team's blog post on profiling go programs (https://blog.golang.org/profiling-go-programs).

The profiling hook is zero overhead unless it is used (https://stackoverflow.com/q/26545159/164234).

Each cloud storage system is slightly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.

Here is an overview of the major features of each cloud storage system.

Name Hash ModTime Case Insensitive Duplicate Files MIME Type Metadata
1Fichier Whirlpool - No Yes R -
Akamai Netstorage MD5, SHA256 R/W No No R -
Amazon Drive MD5 - Yes No R -
Amazon S3 (or S3 compatible) MD5 R/W No No R/W RWU
Backblaze B2 SHA1 R/W No No R/W -
Box SHA1 R/W Yes No - -
Citrix ShareFile MD5 R/W Yes No - -
Dropbox DBHASH ¹ R Yes No - -
Enterprise File Fabric - R/W Yes No R/W -
FTP - R/W ¹⁰ No No - -
Google Cloud Storage MD5 R/W No No R/W -
Google Drive MD5 R/W No Yes R/W -
Google Photos - - No Yes R -
HDFS - R/W No No - -
HiDrive HiDrive ¹² R/W No No - -
HTTP - R No No R -
Hubic MD5 R/W No No R/W -
Internet Archive MD5, SHA1, CRC32 R/W ¹¹ No No - RWU
Jottacloud MD5 R/W Yes No R -
Koofr MD5 - Yes No - -
Mail.ru Cloud Mailru ⁶ R/W Yes No - -
Mega - - No Yes - -
Memory MD5 R/W No No - -
Microsoft Azure Blob Storage MD5 R/W No No R/W -
Microsoft OneDrive SHA1 ⁵ R/W Yes No R -
OpenDrive MD5 R/W Yes Partial ⁸ - -
OpenStack Swift MD5 R/W No No R/W -
pCloud MD5, SHA1 ⁷ R No No W -
premiumize.me - - Yes No R -
put.io CRC-32 R/W No Yes R -
QingStor MD5 - ⁹ No No R/W -
Seafile - - No No - -
SFTP MD5, SHA1 ² R/W Depends No - -
Sia - - No No - -
SugarSync - - No No - -
Storj - R No No - -
Uptobox - - No Yes - -
WebDAV MD5, SHA1 ³ R ⁴ Depends No - -
Yandex Disk MD5 R/W No No R -
Zoho WorkDrive - - No No - -
The local filesystem All R/W Depends No - RWU

¹ Dropbox supports its own custom hash (https://www.dropbox.com/developers/reference/content-hash). This is an SHA256 sum of all the 4 MiB block SHA256s.

² SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH.

³ WebDAV supports hashes when used with Owncloud and Nextcloud only.

⁴ WebDAV supports modtimes when used with Owncloud and Nextcloud only.

⁵ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft's own QuickXorHash (https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash).

⁶ Mail.ru uses its own modified SHA1 hash

⁷ pCloud only supports SHA1 (not MD5) in its EU region

⁸ Opendrive does not support creation of duplicate files using their web client interface or other stock clients, but the underlying storage platform has been determined to allow duplicate files, and it is possible to create them with rclone. It may be that this is a mistake or an unsupported feature.

⁹ QingStor does not support SetModTime for objects bigger than 5 GiB.

¹⁰ FTP supports modtimes for the major FTP servers, and also others if they advertise the required protocol extensions. See this (https://rclone.org/ftp/#modified-time) for more details.

¹¹ Internet Archive requires option wait_archive to be set to a non-zero value for full modtime support.

¹² HiDrive supports its own custom hash (https://static.hidrive.com/dev/0001). It combines SHA1 sums for each 4 KiB block hierarchically to a single top-level sum.

Cloud storage systems support various hash types for their objects. The hashes are used as an integrity check when transferring data and can be used explicitly with the --checksum flag in syncs and in the check command.

To verify checksums when transferring between cloud storage systems they must support a common hash type.

Almost all cloud storage systems store some sort of timestamp on objects, but for several of them it is not something that is appropriate to use for syncing. E.g. some backends will only write a timestamp that represents the time of the upload. To be relevant for syncing it should be able to store the modification time of the source object. If this is not the case, rclone will only check the file size by default, though it can be configured to check the file hash (with the --checksum flag). Ideally it should also be possible to change the timestamp of an existing file without having to re-upload it.
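For example, to compare by hash rather than modification time and size when syncing between two remotes that share a hash type (the remote names are placeholders):

rclone sync remote:path backup:path --checksum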

A - in the ModTime column means the modification time read on objects is not the modification time of the file when it was uploaded. It is most likely the time the file was uploaded, or possibly something else (like the time the picture was taken in Google Photos).

An R (for read-only) in the ModTime column means the system keeps modification times on objects, and updates them when uploading objects, but it does not support changing only the modification time (SetModTime operation) without re-uploading, possibly not even without deleting existing first. Some operations in rclone, such as copy and sync commands, will automatically check for SetModTime support and re-upload if necessary to keep the modification times in sync. Other commands will not work without SetModTime support, e.g. the touch command on an existing file will fail, and changes to modification time only on files in a mount will be silently ignored.

R/W (for read/write) in the ModTime column means the system also supports modtime-only operations.

If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, e.g. file.txt and FILE.txt. If a cloud storage system is case insensitive then that isn't possible.

This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.

The local filesystem and SFTP may or may not be case sensitive depending on OS.

Windows - usually case insensitive, though case is preserved
OSX - usually case insensitive, though it is possible to format case sensitive
Linux - usually case sensitive, but there are case insensitive file systems (e.g. FAT formatted USB keys)

Most of the time this doesn't cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems.

If a cloud storage system allows duplicate files then it can have two objects with the same name.

This confuses rclone greatly when syncing - use the rclone dedupe command to rename or remove duplicates.
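For example, to rename duplicates automatically instead of being asked interactively (remote:dupes is a placeholder path):

rclone dedupe rename remote:dupes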

Some cloud storage systems might have restrictions on the characters that are usable in file or directory names. When rclone detects such a name during a file upload, it will transparently replace the restricted characters with similar looking Unicode characters. To handle the different sets of restricted characters for different backends, rclone uses something it calls encoding.

This process is designed to avoid ambiguous file names as much as possible and to allow files to be moved between many cloud storage systems transparently.

The name shown by rclone to the user or during log output will only contain a minimal set of replaced characters to ensure correct formatting and not necessarily the actual name used on the cloud storage.

This transformation is reversed when downloading a file or parsing rclone arguments. For example, when uploading a file named my file?.txt to OneDrive, it will be displayed as my file?.txt on the console, but stored as my file？.txt on OneDrive (the ? gets replaced by the similar looking ？ character, the so-called "fullwidth question mark"). The reverse transformation allows reading a file unusual/name.txt from Google Drive, by passing the name unusual／name.txt on the command line (the / needs to be replaced by the similar looking ／ character).

The filename encoding system works well in most cases, at least where file names are written in English or similar languages. You might not even notice it: It just works. In some cases it may lead to issues, though. E.g. when file names are written in Chinese, or Japanese, where it is always the Unicode fullwidth variants of the punctuation marks that are used.

On Windows, the characters :, * and ? are examples of restricted characters. If these are used in filenames on a remote that supports it, rclone will transparently convert them to their fullwidth Unicode variants ＊, ？ and ： when downloading to Windows, and back again when uploading. This way files with names that are not allowed on Windows can still be stored.

However, if you have files on your Windows system originally with these same Unicode characters in their names, they will be included in the same conversion process. E.g. if you create a file in your Windows filesystem with the name Test：1.jpg, where ： is the Unicode fullwidth colon symbol, and use rclone to upload it to Google Drive, which supports the regular : (halfwidth colon), rclone will replace the fullwidth ： with the halfwidth : and store the file as Test:1.jpg on Google Drive. Since both Windows and Google Drive allow the name Test：1.jpg, it would probably be better if rclone just kept the name as is in this case.

Consider the opposite situation: you have a file named Test:1.jpg in your Google Drive, e.g. uploaded from a Linux system where : is valid in file names. If you later use rclone to copy this file to your Windows computer, you will notice that on your local disk it gets renamed to Test：1.jpg. The original filename is not legal on Windows, due to the :, and rclone therefore renames it to make the copy possible. That is all good. However, this can also lead to an issue: if you already had a different file named Test：1.jpg on Windows, and then use rclone to copy either way, rclone will treat the file originally named Test:1.jpg on Google Drive and the file originally named Test：1.jpg on Windows as the same file, and replace the contents of one with the other.

It's virtually impossible to handle all cases like these correctly in all situations, but by customizing the encoding option, changing the set of characters that rclone should convert, you should be able to create a configuration that works well for your specific situation. See also the example (https://rclone.org/overview/#encoding-example-windows) below.

(Windows was used as an example of a file system with many restricted characters, and Google Drive as a storage system with few.)

The table below shows the characters that are replaced by default.

When a replacement character is found in a filename, this character will be escaped with the ‛ character to avoid ambiguous file names. (e.g. a file named ␀.txt would be shown as ‛␀.txt)

Each cloud storage backend can use a different set of characters, which will be specified in the documentation for each backend.

Character Value Replacement
NUL 0x00 ␀
SOH 0x01 ␁
STX 0x02 ␂
ETX 0x03 ␃
EOT 0x04 ␄
ENQ 0x05 ␅
ACK 0x06 ␆
BEL 0x07 ␇
BS 0x08 ␈
HT 0x09 ␉
LF 0x0A ␊
VT 0x0B ␋
FF 0x0C ␌
CR 0x0D ␍
SO 0x0E ␎
SI 0x0F ␏
DLE 0x10 ␐
DC1 0x11 ␑
DC2 0x12 ␒
DC3 0x13 ␓
DC4 0x14 ␔
NAK 0x15 ␕
SYN 0x16 ␖
ETB 0x17 ␗
CAN 0x18 ␘
EM 0x19 ␙
SUB 0x1A ␚
ESC 0x1B ␛
FS 0x1C ␜
GS 0x1D ␝
RS 0x1E ␞
US 0x1F ␟
/ 0x2F ／
DEL 0x7F ␡

The default encoding will also encode these file names as they are problematic with many cloud storage systems.

File name Replacement
. ．
.. ．．

Some backends only support a sequence of well formed UTF-8 bytes as file or directory names.

In this case all invalid UTF-8 bytes will be replaced with a quoted representation of the byte value to allow uploading a file to such a backend. For example, the invalid byte 0xFE will be encoded as ‛FE.

A common source of invalid UTF-8 bytes is local filesystems that store names in an encoding other than UTF-8 or UTF-16, such as latin1. See the local filenames (https://rclone.org/local/#filenames) section for details.

Most backends have an encoding option, specified as a flag --backend-encoding where backend is the name of the backend, or as a config parameter encoding (you'll need to select the Advanced config in rclone config to see it).

This will have a default value which encodes and decodes characters in such a way as to preserve the maximum number of characters (see above).

However, this can be incorrect in some scenarios, for example if you have a Windows file system with Unicode fullwidth characters ＊, ？ or ：, that you want to remain as those characters on the remote rather than being translated to regular (halfwidth) *, ? and :.

The --backend-encoding flags allow you to change that. You can disable the encoding completely with --backend-encoding None or set encoding = None in the config file.

Encoding takes a comma separated list of encodings. You can see the list of all possible values by passing an invalid value to this flag, e.g. --local-encoding "help". The command rclone help flags encoding will show you the defaults for the backends.

Encoding Characters Encoded as
Asterisk * ＊
BackQuote ` ｀
BackSlash \ ＼
Colon : ：
CrLf CR 0x0D, LF 0x0A ␍, ␊
Ctl All control characters 0x00-0x1F ␀␁␂␃␄␅␆␇␈␉␊␋␌␍␎␏␐␑␒␓␔␕␖␗␘␙␚␛␜␝␞␟
Del DEL 0x7F ␡
Dollar $ ＄
Dot . or .. as entire string ．, ．．
DoubleQuote " ＂
Hash # ＃
InvalidUtf8 An invalid UTF-8 character (e.g. latin1) �
LeftCrLfHtVt CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the left of a string ␍, ␊, ␉, ␋
LeftPeriod . on the left of a string ．
LeftSpace SPACE on the left of a string ␠
LeftTilde ~ on the left of a string ～
LtGt <, > ＜, ＞
None No characters are encoded
Percent % ％
Pipe | ｜
Question ? ？
RightCrLfHtVt CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the right of a string ␍, ␊, ␉, ␋
RightPeriod . on the right of a string ．
RightSpace SPACE on the right of a string ␠
Semicolon ; ；
SingleQuote ' ＇
Slash / ／
SquareBracket [, ] ［, ］

To take a specific example, the FTP backend's default encoding is

--ftp-encoding "Slash,Del,Ctl,RightSpace,Dot"

However, let's say the FTP server is running on Windows and can't have any of the invalid Windows characters in file names. You are backing up Linux servers to this FTP server which do have those characters in file names. So you would add the Windows set which are

Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot

to the existing ones, giving:

Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del

This can be specified using the --ftp-encoding flag or using an encoding parameter in the config file.
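As a sketch, a config file entry using this encoding might look like the following (the remote name and host are placeholders):

[ftpbackup]
type = ftp
host = ftp.example.com
encoding = Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del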

As another example, take a Windows system where there is a file with the name Test：1.jpg, where ： is the Unicode fullwidth colon symbol. When using rclone to copy this to a remote which supports the regular (halfwidth) colon : (such as Google Drive), you will notice that the file gets renamed to Test:1.jpg.

To avoid this you can change the set of characters rclone should convert for the local filesystem, using command-line argument --local-encoding. Rclone's default behavior on Windows corresponds to

--local-encoding "Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot"

If you want to use fullwidth characters ：, ＊ and ？ in your filenames without rclone changing them when uploading to a remote, then set the same as the default value but without Colon,Question,Asterisk:

--local-encoding "Slash,LtGt,DoubleQuote,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot"

Alternatively, you can disable the conversion of any characters with --local-encoding None.

Instead of using command-line argument --local-encoding, you may also set it as environment variable (https://rclone.org/docs/#environment-variables) RCLONE_LOCAL_ENCODING, or configure (https://rclone.org/docs/#configure) a remote of type local in your config, and set the encoding option there.
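For example, as an environment variable, or as a remote of type local in the config file (the remote name mylocal is only an illustration):

export RCLONE_LOCAL_ENCODING="Slash,LtGt,DoubleQuote,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot"

[mylocal]
type = local
encoding = Slash,LtGt,DoubleQuote,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot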

The risk of doing this is that if you have a filename with the regular (halfwidth) :, * and ? in your cloud storage, and you try to download it to your Windows filesystem, this will fail. These characters are not valid in filenames on Windows, and you have told rclone not to work around this by converting them to valid fullwidth variants.

MIME types (also known as media types) classify types of documents using a simple text classification, e.g. text/html or application/pdf.

Some cloud storage systems support reading (R) the MIME type of objects and some support writing (W) the MIME type of objects.

The MIME type can be important if you are serving files directly to HTTP from the storage system.

If you are copying from a remote which supports reading (R) to a remote which supports writing (W) then rclone will preserve the MIME types. Otherwise they will be guessed from the extension, or the remote itself may assign the MIME type.

Backends may or may not support reading or writing metadata. They may support reading and writing system metadata (metadata intrinsic to that backend) and/or user metadata (general purpose metadata).

The levels of metadata support are

Key Explanation
R Read only System Metadata
RW Read and write System Metadata
RWU Read and write System Metadata and read and write User Metadata

See the metadata docs (https://rclone.org/docs/#metadata) for more info.
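For example, to preserve metadata when copying between two remotes that support it, use the --metadata/-M flag listed in the global flags below (the remote names are placeholders):

rclone copy -M source:path dest:path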

All rclone remotes support a base command set. Other features depend upon backend-specific capabilities.

Name Purge Copy Move DirMove CleanUp ListR StreamUpload LinkSharing About EmptyDir
1Fichier No Yes Yes No No No No Yes No Yes
Akamai Netstorage Yes No No No No Yes Yes No No Yes
Amazon Drive Yes No Yes Yes No No No No No Yes
Amazon S3 (or S3 compatible) No Yes No No Yes Yes Yes Yes No No
Backblaze B2 No Yes No No Yes Yes Yes Yes No No
Box Yes Yes Yes Yes Yes ‡‡ No Yes Yes Yes Yes
Citrix ShareFile Yes Yes Yes Yes No No Yes No No Yes
Dropbox Yes Yes Yes Yes No No Yes Yes Yes Yes
Enterprise File Fabric Yes Yes Yes Yes Yes No No No No Yes
FTP No No Yes Yes No No Yes No No Yes
Google Cloud Storage Yes Yes No No No Yes Yes No No No
Google Drive Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes
Google Photos No No No No No No No No No No
HDFS Yes No Yes Yes No No Yes No Yes Yes
HiDrive Yes Yes Yes Yes No No Yes No No Yes
HTTP No No No No No No No No No Yes
Hubic Yes † Yes No No No Yes Yes No Yes No
Internet Archive No Yes No No Yes Yes No Yes Yes No
Jottacloud Yes Yes Yes Yes Yes Yes No Yes Yes Yes
Koofr Yes Yes Yes Yes No No Yes Yes Yes Yes
Mail.ru Cloud Yes Yes Yes Yes Yes No No Yes Yes Yes
Mega Yes No Yes Yes Yes No No Yes Yes Yes
Memory No Yes No No No Yes Yes No No No
Microsoft Azure Blob Storage Yes Yes No No No Yes Yes No No No
Microsoft OneDrive Yes Yes Yes Yes Yes No No Yes Yes Yes
OpenDrive Yes Yes Yes Yes No No No No No Yes
OpenStack Swift Yes † Yes No No No Yes Yes No Yes No
pCloud Yes Yes Yes Yes Yes No No Yes Yes Yes
premiumize.me Yes No Yes Yes No No No Yes Yes Yes
put.io Yes No Yes Yes Yes No Yes No Yes Yes
QingStor No Yes No No Yes Yes No No No No
Seafile Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes
SFTP No No Yes Yes No No Yes No Yes Yes
Sia No No No No No No Yes No No Yes
SugarSync Yes Yes Yes Yes No No Yes Yes No Yes
Storj Yes † No Yes No No Yes Yes No No No
Uptobox No Yes Yes Yes No No No No No No
WebDAV Yes Yes Yes Yes No No Yes ‡ No Yes Yes
Yandex Disk Yes Yes Yes Yes Yes No Yes Yes Yes Yes
Zoho WorkDrive Yes Yes Yes Yes No No No No Yes Yes
The local filesystem Yes No Yes Yes No No Yes No Yes Yes

Purge deletes a directory more quickly than just deleting all the files in the directory individually.

† Note Swift, Hubic, and Storj implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.

‡ StreamUpload is not supported with Nextcloud

Copy is used when copying an object to and from the same remote. This is known as a server-side copy, so you can copy a file without downloading it and uploading it again. It is used by rclone copy, or by rclone move if the remote doesn't support Move directly.

If the server doesn't support Copy directly then for copy operations the file is downloaded then re-uploaded.

Move is used when moving/renaming an object on the same remote. This is known as a server-side move of a file. It is used by rclone move if the server doesn't support DirMove.

If the server isn't capable of Move then rclone simulates it with Copy then delete. If the server doesn't support Copy then rclone will download the file and re-upload it.

DirMove is used to implement rclone move to move a directory if possible. If it isn't available then rclone will use Move on each file (which falls back to Copy then download and upload - see the Move section).

CleanUp is used for emptying the trash on a remote by rclone cleanup.

If the server can't do CleanUp then rclone cleanup will return an error.

‡‡ Note that while Box implements this it has to delete every file individually so it will be slower than emptying the trash via the WebUI

ListR means the remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list flag to work. See the rclone docs (https://rclone.org/docs/#fast-list) for more details.
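For example (the remote names are placeholders):

rclone sync source:path dest:path --fast-list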

StreamUpload means the remote allows files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat.
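For example, streaming data from standard input straight to a remote (the path is a placeholder):

echo "hello world" | rclone rcat remote:path/to/file.txt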

LinkSharing sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don't have an account on the particular cloud provider.

Rclone about prints quota information for a remote. Typical output includes bytes used, free, quota and in trash.
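Example output (the figures are illustrative only):

$ rclone about remote:
Total:   17 GiB
Used:    7.444 GiB
Free:    1.315 GiB
Trashed: 100.000 MiB
Other:   8.241 GiB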

If a remote lacks about capability rclone about remote: returns an error.

Backends without about capability cannot determine free space for an rclone mount, or use policy mfs (most free space) as a member of an rclone union remote.

See rclone about command (https://rclone.org/commands/rclone_about/)

EmptyDir means the remote supports empty directories. See Limitations (https://rclone.org/bugs/#limitations) for details. Most Object/Bucket-based remotes do not support this.

This describes the global flags available to every rclone command, split into two groups: non-backend and backend flags.

These flags are available for every command.


--ask-password Allow prompt for password for encrypted configuration (default true)
--auto-confirm If enabled, do not request console confirmation
--backup-dir string Make backups into hierarchy based in DIR
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
--bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--ca-cert string CA certificate used to verify servers
--cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone")
--check-first Do all the checks before starting transfers
--checkers int Number of checkers to run in parallel (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--config string Config file (default "$HOME/.config/rclone/rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cpuprofile string Write cpu profile to file
--cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features (use --disable help to see a list)
--disable-http-keep-alives Disable HTTP keep-alives and use each connection once.
--disable-http2 Disable HTTP/2 in the global transport
-n, --dry-run Do a trial run with no permanent changes
--dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file (use - to read from stdin)
--exclude-if-present stringArray Exclude directories if filename is present
--expect-continue-timeout duration Timeout when using expect / 100-continue in HTTP (default 1s)
--fast-list Use recursive list if available; uses more memory but fewer transactions
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file (use - to read from stdin)
--fs-cache-expire-duration duration Cache remotes for this long (0 to disable caching) (default 5m0s)
--fs-cache-expire-interval duration Interval to check for expired remotes (default 1m0s)
--header stringArray Set HTTP header for all transactions
--header-download stringArray Set HTTP header for download transactions
--header-upload stringArray Set HTTP header for upload transactions
--human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi
--ignore-case Ignore case in filters (case insensitive)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file (use - to read from stdin)
-i, --interactive Enable interactive mode
--kv-lock-time duration Maximum time to keep key-value database locked by process (default 1s)
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--log-systemd Activate systemd integration for the logger
--low-level-retries int Number of low level retries to do (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this (default -1)
--max-duration duration Maximum duration rclone will transfer data for
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
--memprofile string Write memory profile to file
-M, --metadata If set, preserve metadata when copying objects
--metadata-set stringArray Add metadata key=value when uploading
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4)
--no-check-certificate Do not verify the server SSL certificate (insecure)
--no-check-dest Don't check the destination, copy regardless
--no-console Hide console window (supported on Windows only)
--no-gzip-encoding Don't set Accept-Encoding: gzip
--no-traverse Don't traverse destination file system on copy
--no-unicode-normalization Don't normalize unicode characters in filenames
--no-update-modtime Don't update destination mod-time if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--password-command SpaceSepList Command for supplying password for encrypted configuration
-P, --progress Show progress during transfer
--progress-terminal-title Show progress on the terminal title (requires -P/--progress)
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server
--rc-addr string IPaddress:Port or :Port to bind server to (default "localhost:5572")
--rc-allow-origin string Set the allowed origin for CORS
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-enable-metrics Enable prometheus metrics on /metrics
--rc-files string Path to local files to serve on the HTTP server
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-job-expire-duration duration Expire finished async jobs older than this value (default 1m0s)
--rc-job-expire-interval duration Interval to check for expired async jobs (default 10s)
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-no-auth Don't require auth for certain methods
--rc-pass string Password for authentication
--rc-realm string Realm for authentication (default "rclone")
--rc-serve Enable the serving of remote objects
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
--rc-web-gui-no-open-browser Don't open the browser automatically
--rc-web-gui-update Check and update to latest version of web gui
--refresh-times Refresh the modtime of remote files
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line
--stats-one-line-date Enable --stats-one-line and add current date/time prefix
--stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
--stats-unit string Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
--suffix string Suffix to add to changed files
--suffix-keep-extension Preserve the extension when using --suffix
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON")
--temp-dir string Directory rclone will use for temporary files (default "/tmp")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--track-renames When synchronizing, track file renames and do a server-side move if possible
--track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash")
--transfers int Number of file transfers to run in parallel (default 4)
-u, --update Skip files that are newer on the destination
--use-cookies Enable session cookiejar
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.1")
-v, --verbose count Print lots more stuff (repeat for more)

These flags are available for every command. They control the backends and may be set in the config file.


--acd-auth-url string Auth server URL
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
--acd-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob
--acd-token-url string Token server url
--acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
--alias-remote string Remote or path to alias
--azureblob-access-tier string Access tier of blob: hot, cool or archive
--azureblob-account string Storage Account Name
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting
--azureblob-chunk-size SizeSuffix Upload chunk size (default 4Mi)
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
--azureblob-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key
--azureblob-list-chunk int Size of blob list (default 5000)
--azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any
--azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
--azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any
--azureblob-no-head-object If set, do not do HEAD before GET when getting objects
--azureblob-public-access string Public access level of a container: blob or container
--azureblob-sas-url string SAS URL for container level access only
--azureblob-service-principal-file string Path to file containing credentials for use with a service principal
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size (default 96Mi)
--b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
--b2-download-url string Custom endpoint for downloads
--b2-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--b2-endpoint string Endpoint for the service
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
--b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-version-at Time Show file versions as they were at the specified time (default off)
--b2-versions Include old versions in directory listings
--box-access-token string Box App Primary Access Token
--box-auth-url string Auth server URL
--box-box-config-file string Box App config.json location
--box-box-sub-type string (default "user")
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
--box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
--box-owned-by string Only show items owned by the login (email address) passed in
--box-root-folder-id string Fill in for rclone to use a non root folder as its starting point
--box-token string OAuth Access Token as a JSON blob
--box-token-url string Token server url
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50 MiB) (default 50Mi)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data) (default 5Mi)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk (default 10Gi)
--cache-db-path string Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.) (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verification when connecting to the Plex server
--cache-plex-password string The password of the Plex user (obscured)
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Cache file data on writes through the FS
--chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi)
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
--combine-upstreams SpaceSepList Upstreams for combining
--compress-level int GZIP compression level (-2 to 9) (default -1)
--compress-mode string Compression mode (default "gzip")
--compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi)
--compress-remote string Remote to compress
-L, --copy-links Follow symlinks and copy the pointed to item
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true)
--crypt-filename-encoding string How to encode the encrypted filename to text string (default "base32")
--crypt-filename-encryption string How to encrypt the filenames (default "standard")
--crypt-no-data-encryption Option to either encrypt file data or leave it unencrypted
--crypt-password string Password or pass phrase for encryption (obscured)
--crypt-password2 string Password or pass phrase for salt (obscured)
--crypt-remote string Remote to encrypt/decrypt
--crypt-server-side-across-configs Allow server-side operations (e.g. copy) to work across different crypt configs
--crypt-show-mapping For all files listed show how the names encrypt
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
--drive-auth-url string Auth server URL
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
--drive-disable-http2 Disable drive using http2 (default true)
--drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8)
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: See export_formats
--drive-impersonate string Impersonate this user when using a service account
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms)
--drive-resource-key string Resource key for accessing a link-shared file
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive
--drive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different drive configs
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-size-as-quota Show sizes as storage quota usage, not actual size
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
--drive-skip-dangling-shortcuts If set skip dangling shortcut files
--drive-skip-gdocs Skip google documents in all listings
--drive-skip-shortcuts If set skip shortcut files
--drive-starred-only Only show files that are starred
--drive-stop-on-download-limit Make download limit errors be fatal
--drive-stop-on-upload-limit Make upload limit errors be fatal
--drive-team-drive string ID of the Shared Drive (Team Drive)
--drive-token string OAuth Access Token as a JSON blob
--drive-token-url string Token server url
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8Mi)
--drive-use-created-date Use file created date instead of modified date
--drive-use-shared-date Use date file was shared instead of modified date
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
--dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
--dropbox-token string OAuth Access Token as a JSON blob
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
--filefabric-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
--filefabric-root-folder-id string ID of the root folder
--filefabric-token string Session Token
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--ftp-ask-password Allow asking for FTP password when needed
--ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s)
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-disable-epsv Disable using EPSV even if server advertises support
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
--ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-pass string FTP password (obscured)
--ftp-port int FTP port number (default 21)
--ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
--ftp-tls Use Implicit FTPS (FTP over TLS)
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
--ftp-user string FTP username (default "$USER")
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
--gcs-project-number string Project number
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage
--gcs-token string OAuth Access Token as a JSON blob
--gcs-token-url string Token server url
--gphotos-auth-url string Auth server URL
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
--gphotos-token string OAuth Access Token as a JSON blob
--gphotos-token-url string Token server url
--hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default)
--hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1)
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
--hdfs-encoding MultiEncoder The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
--hdfs-namenode string Hadoop name node and port
--hdfs-service-principal-name string Kerberos service principal name for the namenode
--hdfs-username string Hadoop user name
--hidrive-auth-url string Auth server URL
--hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi)
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary
--hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
--hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1")
--hidrive-root-prefix string The root/parent folder for all paths (default "/")
--hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw")
--hidrive-scope-role string User-level that rclone should use when requesting access from HiDrive (default "user")
--hidrive-token string OAuth Access Token as a JSON blob
--hidrive-token-url string Token server url
--hidrive-upload-concurrency int Concurrency for chunked uploads (default 4)
--hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi)
--http-headers CommaSepList Set HTTP headers for all transactions
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
--hubic-auth-url string Auth server URL
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--hubic-client-id string OAuth Client Id
--hubic-client-secret string OAuth Client Secret
--hubic-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
--hubic-no-chunk Don't chunk files during streaming upload
--hubic-token string OAuth Access Token as a JSON blob
--hubic-token-url string Token server url
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
--internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting for the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
--jottacloud-trashed-only Only show files that are in the trash
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
--koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
--local-nounc Disable UNC (long path names) conversion on Windows
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
--mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi)
--mailru-speedup-max-memory SizeSuffix Files larger than this size will always be hashed on disk (default 32Mi)
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega
--mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-user string User name
--netstorage-account string Set the NetStorage account name
--netstorage-host string Domain+path of NetStorage host to connect to
--netstorage-protocol string Select between HTTP or HTTPS protocol (default "https")
--netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only)
--onedrive-access-scopes SpaceSepList Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access)
--onedrive-auth-url string Auth server URL
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command (default "view")
--onedrive-list-chunk int Size of listing chunk (default 1000)
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-region string Choose national cloud region for OneDrive (default "global")
--onedrive-root-folder-id string ID of the root folder
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-password string Your pcloud password (obscured)
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
--qingstor-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads (default 1)
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--qingstor-zone string Zone to connect to
--s3-access-key-id string AWS Access Key ID
--s3-acl string Canned ACL used when creating buckets and storing or copying objects
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
--s3-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
--s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto
--s3-location-constraint string Location constraint - must be set to match the Region
--s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
--s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
--s3-no-head-object If set, do not do HEAD before GET when getting objects
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
--s3-requester-pays Enables requester pays option when interacting with S3 bucket
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
--s3-upload-concurrency int Concurrency for multipart uploads (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
--seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured)
--seafile-url string URL of seafile host to connect to
--seafile-user string User name (usually email address)
--sftp-ask-password Allow asking for SFTP password when needed
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
--sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
--sftp-host string SSH host to connect to
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-file string Path to PEM-encoded private key file
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file (obscured)
--sftp-key-pem string Raw PEM-encoded private key
--sftp-key-use-agent When set forces the usage of the ssh-agent
--sftp-known-hosts-file string Optional path to known_hosts file
--sftp-md5sum-command string The command used to read md5 hashes
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH shell commands
--sftp-port int SSH port number (default 22)
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run an sftp server on the remote host
--sftp-set-env SpaceSepList Environment variables to pass to sftp and commands
--sftp-set-modtime Set the modified time on the remote if set (default true)
--sftp-sha1sum-command string The command used to read sha1 hashes
--sftp-shell-type string The type of SSH shell on remote server, if any
--sftp-skip-links Set to skip any symlinks and any other non-regular files
--sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp")
--sftp-use-fstat If set use fstat instead of stat
--sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods
--sftp-user string SSH username (default "$USER")
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--sia-api-password string Sia Daemon API Password (obscured)
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
--sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-passphrase string Encryption passphrase
--storj-provider string Choose an authentication method (default "existing")
--storj-satellite-address string Satellite address (default "us-central-1.storj.io")
--sugarsync-access-key-id string Sugarsync Access Key ID
--sugarsync-app-id string Sugarsync App ID
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
--sugarsync-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token
--sugarsync-root-id string Sugarsync root id
--sugarsync-user string Sugarsync user
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form
--swift-key string API key or password (OS_PASSWORD)
--swift-leave-parts-on-error If true avoid calling abort upload on a failure
--swift-no-chunk Don't chunk files during streaming upload
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
--union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi)
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access token
--uptobox-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-encoding string The encoding for the backend
--webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-pass string Password (obscured)
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-hard-delete Delete files permanently rather than putting them into the trash
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url

Docker 1.9 added support for creating named volumes (https://docs.docker.com/storage/volumes/) via the command-line interface (https://docs.docker.com/engine/reference/commandline/volume_create/) and mounting them in containers as a way to share data between them. Since Docker 1.10 you can create named volumes with Docker Compose (https://docs.docker.com/compose/) by descriptions in docker-compose.yml (https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference) files for use by container groups on a single host. As of Docker 1.12 volumes are supported by Docker Swarm (https://docs.docker.com/engine/swarm/key-concepts/) included with Docker Engine and created from descriptions in swarm compose v3 (https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference) files for use with swarm stacks across multiple cluster nodes.

Docker Volume Plugins (https://docs.docker.com/engine/extend/plugins_volume/) augment the default local volume driver included in Docker with stateful volumes shared across containers and hosts. Unlike local volumes, your data will not be deleted when such a volume is removed. Plugins can run managed by the docker daemon, as a native system service (under systemd, sysv or upstart) or as a standalone executable. Rclone can run as a docker volume plugin in all these modes. It interacts with the local docker daemon via the plugin API (https://docs.docker.com/engine/extend/plugin_api/) and handles mounting of remote file systems into docker containers, so it must run on the same host as the docker daemon or on every Swarm node.

Getting started

In the first example we will use the SFTP (https://rclone.org/sftp/) rclone volume with Docker engine on a standalone Ubuntu machine.

Start from installing Docker (https://docs.docker.com/engine/install/) on the host.

The FUSE driver is a prerequisite for rclone mounting and should be installed on the host:

sudo apt-get -y install fuse

Create the two directories required by the rclone docker plugin:

sudo mkdir -p /var/lib/docker-plugins/rclone/config
sudo mkdir -p /var/lib/docker-plugins/rclone/cache

Install the managed rclone docker plugin for your architecture (here amd64):

docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions
docker plugin list

Create your SFTP volume (https://rclone.org/sftp/#standard-options):

docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true

Note that since all options are static, you don't even have to run rclone config or create the rclone.conf file (but the config directory should still be present). In the simplest case you can use localhost as hostname and your SSH credentials as username and password. You can also change the remote path to your home directory on the host, for example -o path=/home/username.
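
For instance, a minimal sketch of such a localhost volume rooted at your home directory on the host (the volume name, username, password and path below are placeholders you must substitute):

docker volume create homevolume -d rclone -o type=sftp -o sftp-host=localhost -o sftp-user=_username_ -o sftp-pass=_password_ -o path=/home/_username_ -o allow-other=true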

Time to create a test container and mount the volume into it:

docker run --rm -it -v firstvolume:/mnt --workdir /mnt ubuntu:latest bash

If all goes well, you will enter the new container and land right in the mounted SFTP remote. You can type ls to list the mounted directory or otherwise play with it. Type exit when you are done. The container will stop but the volume will stay, ready to be reused. When it's not needed anymore, remove it:

docker volume list
docker volume remove firstvolume

Now let us try something more elaborate: a Google Drive (https://rclone.org/drive/) volume on a multi-node Docker Swarm.

Start by installing Docker and FUSE, creating the plugin directories and installing the rclone plugin on every swarm node. Then set up the Swarm (https://docs.docker.com/engine/swarm/swarm-mode/).

Google Drive volumes need an access token which can be set up via a web browser and will be periodically renewed by rclone. The managed plugin cannot run a browser so we will use a technique similar to the rclone setup on a headless box (https://rclone.org/remote_setup/).

Run rclone config (https://rclone.org/commands/rclone_config_create/) on another machine equipped with a web browser and graphical user interface. Create the Google Drive remote (https://rclone.org/drive/#standard-options). When done, transfer the resulting rclone.conf to the Swarm cluster and save it as /var/lib/docker-plugins/rclone/config/rclone.conf on every node. By default this location is accessible only to the root user so you will need appropriate privileges. The resulting config will look like this:

[gdrive]
type = drive
scope = drive
drive_id = 1234567...
root_folder_id = 0Abcd...
token = {"access_token":...}

Now create a file named example.yml with a swarm stack description like this:

version: '3'
services:
  heimdall:
    image: linuxserver/heimdall:latest
    ports: [8080:80]
    volumes: [configdata:/config]

volumes:
  configdata:
    driver: rclone
    driver_opts:
      remote: 'gdrive:heimdall'
      allow_other: 'true'
      vfs_cache_mode: full
      poll_interval: 0

and run the stack:

docker stack deploy example -c ./example.yml

After a few seconds docker will spread the parsed stack description over the cluster, create the example_heimdall service on port 8080, run service containers on one or more cluster nodes and request the example_configdata volume from the rclone plugins on the node hosts. You can use the following commands to confirm the results:

docker service ls
docker service ps example_heimdall
docker volume ls

Point your browser to http://cluster.host.address:8080 and play with the service. Stop it with docker stack remove example when you are done. Note that the example_configdata volume(s) created on demand at the cluster nodes will not be automatically removed together with the stack but will stay for future reuse. You can remove them manually by invoking the docker volume remove example_configdata command on every node.

Volumes can be created with docker volume create (https://docs.docker.com/engine/reference/commandline/volume_create/). Here are a few examples:

docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full
docker volume create vol2 -d rclone -o remote=:storj,access_grant=xxx:heimdall
docker volume create vol3 -d rclone -o type=storj -o path=heimdall -o storj-access-grant=xxx -o poll-interval=0

Note the -d rclone flag that tells docker to request the volume from the rclone driver. This works even if you installed the managed driver under its full name rclone/docker-volume-rclone, because you provided the --alias rclone option.

Volumes can be inspected as follows:

docker volume list
docker volume inspect vol1

Rclone flags and volume options are set via the -o flag to the docker volume create command. They include backend-specific parameters as well as mount and VFS options. There are also a few special -o options: remote, fs, type, path, mount-type and persist.

remote determines an existing remote name from the config file, with trailing colon and optionally with a remote path. See the full syntax in the rclone documentation (https://rclone.org/docs/#syntax-of-remote-paths). This option can be aliased as fs to prevent confusion with the remote parameter of such backends as crypt or alias.

The remote=:backend:dir/subdir syntax can be used to create on-the-fly (config-less) remotes (https://rclone.org/docs/#backend-path-to-dir), while the type and path options provide a simpler alternative for this. Using two split options

-o type=backend -o path=dir/subdir

is equivalent to the combined syntax

-o remote=:backend:dir/subdir

but is arguably easier to parameterize in scripts. The path part is optional.

Mount and VFS options (https://rclone.org/commands/rclone_serve_docker/#options) as well as backend parameters (https://rclone.org/flags/#backend-flags) are named like their twin command-line flags without the -- CLI prefix. Optionally you can use underscores instead of dashes in option names. For example, --vfs-cache-mode full becomes -o vfs-cache-mode=full or -o vfs_cache_mode=full. Boolean CLI flags without value will gain the true value, e.g. --allow-other becomes -o allow-other=true or -o allow_other=true.
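
To illustrate, the two commands below create equivalent hypothetical volumes from the gdrive: remote configured earlier, one using the dash form and one using the underscore form of the same options:

docker volume create volx -d rclone -o remote=gdrive: -o vfs-cache-mode=full -o allow-other=true
docker volume create voly -d rclone -o remote=gdrive: -o vfs_cache_mode=full -o allow_other=true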

Please note that you can provide parameters only for the backend immediately referenced by the backend type of the mounted remote. If this is a wrapping backend like alias, chunker or crypt, you cannot provide options for the referred-to remote or backend. This limitation is imposed by the rclone connection string parser. The only workaround is to feed the plugin with rclone.conf or configure plugin arguments (see below).

mount-type determines the mount method and in general can be one of: mount, cmount, or mount2. This can be aliased as mount_type. It should be noted that the managed rclone docker plugin currently does not support the cmount method and mount2 is rarely needed. This option defaults to the first found method, which is usually mount so you generally won't need it.

persist is a reserved boolean (true/false) option. In the future it will allow persisting on-the-fly remotes in the plugin rclone.conf file.

The remote value can be extended with connection strings (https://rclone.org/docs/#connection-strings) as an alternative way to supply backend parameters. This is equivalent to the -o backend options with one syntactic difference. Inside connection string the backend prefix must be dropped from parameter names but in the -o param=value array it must be present. For instance, compare the following option array

-o remote=:sftp:/home -o sftp-host=localhost

with equivalent connection string:

-o remote=:sftp,host=localhost:/home

This difference exists because flag options -o key=val include not only backend parameters but also mount/VFS flags and possibly other settings. It also allows discriminating the remote option from crypt-remote (or similarly named backend parameters) and arguably simplifies scripting due to clearer value substitution.

Both Docker Swarm and Docker Compose use YAML (http://yaml.org/spec/1.2/spec.html)-formatted text files to describe groups (stacks) of containers, their properties, networks and volumes. Compose uses the compose v2 (https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference) format, Swarm uses the compose v3 (https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference) format. They are mostly similar; the differences are explained in the docker documentation (https://docs.docker.com/compose/compose-file/compose-versioning/#upgrading).

Volumes are described by the children of the top-level volumes: node. Each of them should be named after its volume and have at least two elements, the self-explanatory driver: rclone value and the driver_opts: structure playing the same role as -o key=val CLI flags:

volumes:
  volume_name_1:
    driver: rclone
    driver_opts:
      remote: 'gdrive:'
      allow_other: 'true'
      vfs_cache_mode: full
      token: '{"type": "borrower", "expires": "2021-12-31"}'
      poll_interval: 0

Notice a few important details:
- YAML prefers _ in option names instead of -.
- YAML treats single and double quotes interchangeably. Simple strings and integers can be left unquoted.
- Boolean values must be quoted like 'true' or "false" because these two words are reserved by YAML.
- The filesystem string is keyed with remote (or with fs). Normally you can omit quotes here, but if the string ends with colon, you must quote it like remote: "storage_box:".
- YAML is picky about surrounding braces in values as this is in fact another syntax for key/value mappings (http://yaml.org/spec/1.2/spec.html#id2790832). For example, JSON access tokens usually contain double quotes and surrounding braces, so you must put them in single quotes.

Docker daemon can install plugins from an image registry and run them managed. We maintain the docker-volume-rclone (https://hub.docker.com/p/rclone/docker-volume-rclone/) plugin image on Docker Hub (https://hub.docker.com).

The rclone volume plugin requires Docker Engine >= 19.03.15.

The plugin requires the presence of two directories on the host before it can be installed. Note that the plugin will not create them automatically. By default they must exist on the host at the following locations (though you can tweak the paths):
- /var/lib/docker-plugins/rclone/config is reserved for the rclone.conf config file and must exist even if it's empty and the config file is not present.
- /var/lib/docker-plugins/rclone/cache holds the plugin state file as well as optional VFS caches.

You can install the managed plugin (https://docs.docker.com/engine/reference/commandline/plugin_install/) with default settings as follows:

docker plugin install rclone/docker-volume-rclone:amd64 --grant-all-permissions --alias rclone

The :amd64 part of the image specification after the colon is called a tag. Usually you will want to install the latest plugin for your architecture. In this case the tag will just name it, like amd64 above. The following plugin architectures are currently available:
- amd64
- arm64
- arm-v7

Sometimes you might want a concrete plugin version, not the latest one. Then you should use an image tag in the form :ARCHITECTURE-VERSION. For example, to install plugin version v1.56.2 on architecture arm64 you will use the tag arm64-1.56.2 (note the removed v) so the full image specification becomes rclone/docker-volume-rclone:arm64-1.56.2.

We also provide the latest plugin tag, but since docker does not support multi-architecture plugins as of the time of this writing, this tag is currently an alias for amd64. By convention the latest tag is the default one and can be omitted, thus both rclone/docker-volume-rclone:latest and just rclone/docker-volume-rclone will refer to the latest plugin release for the amd64 platform.

The amd64 part can also be omitted from the versioned rclone plugin tags. For example, the rclone image reference rclone/docker-volume-rclone:amd64-1.56.2 can be abbreviated as rclone/docker-volume-rclone:1.56.2 for convenience. However, for non-intel architectures you still have to use the full tag, as amd64 or latest will fail to start.
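
For example, on an arm64 host a pinned version would be installed with the full tag (a sketch; the version shown is only illustrative):

docker plugin install rclone/docker-volume-rclone:arm64-1.56.2 --grant-all-permissions --alias rclone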

The managed plugin is in fact a special container running in a namespace separate from normal docker containers. Inside it runs the rclone serve docker command. The config and cache directories are bind-mounted into the container at start. The docker daemon connects to a unix socket created by the command inside the container. The command creates on-demand remote mounts right inside, then the docker machinery propagates them through kernel mount namespaces and bind-mounts them into the requesting user containers.

You can tweak a few plugin settings after installation when it's disabled (not in use), for instance:

docker plugin disable rclone
docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args="--vfs-cache-mode=writes --allow-other"
docker plugin enable rclone
docker plugin inspect rclone

Note that if docker refuses to disable the plugin, you should find and remove all active volumes connected with it as well as the containers and swarm services that use them. This is rather tedious, so please plan carefully in advance.

You can tweak the following settings: args, config, cache, HTTP_PROXY, HTTPS_PROXY, NO_PROXY and RCLONE_VERBOSE. It's your task to keep plugin settings in sync across swarm cluster nodes.

args sets command-line arguments for the rclone serve docker command (none by default). Arguments should be separated by space so you will normally want to put them in quotes on the docker plugin set (https://docs.docker.com/engine/reference/commandline/plugin_set/) command line. Both serve docker flags (https://rclone.org/commands/rclone_serve_docker/#options) and generic rclone flags (https://rclone.org/flags/) are supported, including backend parameters that will be used as defaults for volume creation. Note that the plugin will fail (due to this docker bug (https://github.com/moby/moby/blob/v20.10.7/plugin/v2/plugin.go#L195)) if the args value is empty. Use e.g. args="-v" as a workaround.

config=/host/dir sets an alternative host location for the config directory. The plugin will look for rclone.conf here. It's not an error if the config file is not present, but the directory must exist. Please note that the plugin can periodically rewrite the config file, for example when it renews storage access tokens. Keep this in mind and try to avoid races between the plugin and other instances of rclone on the host that might try to change the config simultaneously, resulting in a corrupted rclone.conf. You can also put stuff like private key files for SFTP remotes in this directory. Just note that it's bind-mounted inside the plugin container at the predefined path /data/config. For example, if your key file is named sftp-box1.key on the host, the corresponding volume config option should read -o sftp-key-file=/data/config/sftp-box1.key.
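
As a sketch (the volume name, hostname and username are placeholders), a key file dropped into the default config directory on the host is then referenced through the bind-mounted path inside the plugin:

# on the host
cp sftp-box1.key /var/lib/docker-plugins/rclone/config/
# in the volume definition the same file appears under /data/config
docker volume create boxvol -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-key-file=/data/config/sftp-box1.key -o allow-other=true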

cache=/host/dir sets an alternative host location for the cache directory. The plugin will keep VFS caches here. It will also create and maintain the docker-plugin.state file in this directory. When the plugin is restarted or reinstalled, it will look in this file to recreate any volumes that existed previously. However, they will not be re-mounted into consuming containers after the restart. Usually this is not a problem as the docker daemon will normally restart affected user containers after failures, daemon restarts or host reboots.
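
A minimal sketch of relocating the cache (the host path is a placeholder; remember that the plugin must be disabled while changing settings and that the directory must exist):

mkdir -p /mnt/bigdisk/rclone-cache
docker plugin disable rclone
docker plugin set rclone cache=/mnt/bigdisk/rclone-cache
docker plugin enable rclone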

RCLONE_VERBOSE sets plugin verbosity from 0 (errors only, by default) to 2 (debugging). Verbosity can also be tweaked via args="-v [-v] ...". Since arguments are more generic, you will rarely need this setting. The plugin output by default feeds the docker daemon log on the local host. Log entries are reflected as errors in the docker log but retain their actual level assigned by rclone in the encapsulated message string.

HTTP_PROXY, HTTPS_PROXY, NO_PROXY customize the plugin proxy settings.
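
A hypothetical sketch of pointing the plugin at a corporate proxy (addresses are placeholders), using the same disable/set/enable cycle shown above:

docker plugin disable rclone
docker plugin set rclone HTTP_PROXY=http://proxy.example.com:3128 HTTPS_PROXY=http://proxy.example.com:3128 NO_PROXY=localhost,127.0.0.1
docker plugin enable rclone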

You can set custom plugin options right when you install it, in one go:

docker plugin remove rclone
docker plugin install rclone/docker-volume-rclone:amd64 \
    --alias rclone --grant-all-permissions \
    args="-v --allow-other" config=/etc/rclone
docker plugin inspect rclone

The docker plugin volume protocol doesn't provide a way for plugins to inform the docker daemon that a volume is (un-)available. As a workaround you can set up a healthcheck to verify that the mount is responding, for example:

services:
  my_service:
    image: my_image
    healthcheck:
      test: ls /path/to/rclone/mount || exit 1
      interval: 1m
      timeout: 15s
      retries: 3
      start_period: 15s

In most cases you should prefer managed mode. Moreover, MacOS and Windows do not support native Docker plugins. Please use managed mode on these systems. Proceed further only if you are on Linux.

First, install rclone (https://rclone.org/install/). You can just run it (type rclone serve docker and hit enter) for the test.

Install FUSE:

sudo apt-get -y install fuse

Download two systemd configuration files: docker-volume-rclone.service (https://raw.githubusercontent.com/rclone/rclone/master/contrib/docker-plugin/systemd/docker-volume-rclone.service) and docker-volume-rclone.socket (https://raw.githubusercontent.com/rclone/rclone/master/contrib/docker-plugin/systemd/docker-volume-rclone.socket).

Put them into the /etc/systemd/system/ directory:

cp docker-volume-rclone.service /etc/systemd/system/
cp docker-volume-rclone.socket  /etc/systemd/system/

Please note that all commands in this section must be run as root but we omit the sudo prefix for brevity. Now create the directories required by the service:

mkdir -p /var/lib/docker-volumes/rclone
mkdir -p /var/lib/docker-plugins/rclone/config
mkdir -p /var/lib/docker-plugins/rclone/cache

Run the docker plugin service in socket-activated mode:

systemctl daemon-reload
systemctl start docker-volume-rclone.service
systemctl enable docker-volume-rclone.socket
systemctl start docker-volume-rclone.socket
systemctl restart docker

Or run the service directly (the same steps are collected into a command block below):
- run systemctl daemon-reload to let systemd pick up the new config
- run systemctl enable docker-volume-rclone.service to make the new service start automatically when you power on your machine
- run systemctl start docker-volume-rclone.service to start the service now
- run systemctl restart docker to restart the docker daemon and let it detect the new plugin socket. Note that this step is not needed in managed mode, where docker knows about plugin state changes.
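
The direct (non-socket) variant as one sketch, mirroring the list above:

systemctl daemon-reload
systemctl enable docker-volume-rclone.service
systemctl start docker-volume-rclone.service
systemctl restart docker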

The two methods are equivalent from the user perspective, but I personally prefer socket activation.

You can see managed plugin settings (https://docs.docker.com/engine/extend/#debugging-plugins) with

docker plugin list
docker plugin inspect rclone

Note that docker (including the latest 20.10.7) will not show the actual values of args, just the defaults.

Use journalctl --unit docker to see the managed plugin output as part of the docker daemon log. Note that docker reflects plugin lines as errors but their actual level can be seen from the encapsulated message string.

You will usually install the latest version of the managed plugin for your platform. Use the following commands to print the actual installed version:

PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}')
sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version

You can even use runc to run a shell inside the plugin container:

sudo runc --root /run/docker/runtime-runc/plugins.moby exec --tty $PLUGID bash

You can also use curl to check the plugin socket connectivity:

docker plugin list --no-trunc
PLUGID=123abc...
sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate

though this is rarely needed.

Finally I'd like to mention a caveat with updating volume settings. Docker CLI does not have a dedicated command like docker volume update. It may be tempting to invoke docker volume create with updated options on an existing volume, but there is a gotcha. The command will do nothing; it won't even return an error. I hope that docker maintainers will fix this some day. In the meantime be aware that you must remove your volume before recreating it with new settings:

docker volume remove my_vol
docker volume create my_vol -d rclone -o opt1=new_val1 ...

and verify that the settings did update:

docker volume list
docker volume inspect my_vol

If docker refuses to remove the volume, you should find containers or swarm services that use it and stop them first.

Getting started

Install rclone (https://rclone.org/install/) and setup your remotes.
Bisync will create its working directory at ~/.cache/rclone/bisync on Linux or C:\Users\MyLogin\AppData\Local\rclone\bisync on Windows. Make sure that this location is writable.
Run bisync with the --resync flag, specifying the paths to the local and remote sync directory roots.
For successive sync runs, leave off the --resync flag.
Consider using a filters file for excluding unnecessary files and directories from the sync.
Consider setting up the --check-access feature for safety.
On Linux, consider setting up a crontab entry. bisync can safely run in concurrent cron jobs thanks to lock files it maintains.
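
A minimal crontab sketch for such a scheduled run (the schedule, paths and filters file are placeholders, and the first run must already have been done with --resync):

# run bisync every 30 minutes; you may need the full path to the rclone binary in cron
*/30 * * * *  rclone bisync /local/path remote:path --check-access --filters-file /path/to/filters.txt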

Here is a typical run log (with timestamps removed for clarity):

rclone bisync /testdir/path1/ /testdir/path2/ --verbose
INFO  : Synching Path1 "/testdir/path1/" with Path2 "/testdir/path2/"
INFO  : Path1 checking for diffs
INFO  : - Path1    File is new                         - file11.txt
INFO  : - Path1    File is newer                       - file2.txt
INFO  : - Path1    File is newer                       - file5.txt
INFO  : - Path1    File is newer                       - file7.txt
INFO  : - Path1    File was deleted                    - file4.txt
INFO  : - Path1    File was deleted                    - file6.txt
INFO  : - Path1    File was deleted                    - file8.txt
INFO  : Path1:    7 changes:    1 new,    3 newer,    0 older,    3 deleted
INFO  : Path2 checking for diffs
INFO  : - Path2    File is new                         - file10.txt
INFO  : - Path2    File is newer                       - file1.txt
INFO  : - Path2    File is newer                       - file5.txt
INFO  : - Path2    File is newer                       - file6.txt
INFO  : - Path2    File was deleted                    - file3.txt
INFO  : - Path2    File was deleted                    - file7.txt
INFO  : - Path2    File was deleted                    - file8.txt
INFO  : Path2:    7 changes:    1 new,    3 newer,    0 older,    3 deleted
INFO  : Applying changes
INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file11.txt
INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file2.txt
INFO  : - Path2    Queue delete                        - /testdir/path2/file4.txt
NOTICE: - WARNING  New or changed in both paths        - file5.txt
NOTICE: - Path1    Renaming Path1 copy                 - /testdir/path1/file5.txt..path1
NOTICE: - Path1    Queue copy to Path2                 - /testdir/path2/file5.txt..path1
NOTICE: - Path2    Renaming Path2 copy                 - /testdir/path2/file5.txt..path2
NOTICE: - Path2    Queue copy to Path1                 - /testdir/path1/file5.txt..path2
INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file6.txt
INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file7.txt
INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file1.txt
INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file10.txt
INFO  : - Path1    Queue delete                        - /testdir/path1/file3.txt
INFO  : - Path2    Do queued copies to                 - Path1
INFO  : - Path1    Do queued copies to                 - Path2
INFO  : -          Do queued deletes on                - Path1
INFO  : -          Do queued deletes on                - Path2
INFO  : Updating listings
INFO  : Validating listings for Path1 "/testdir/path1/" vs Path2 "/testdir/path2/"
INFO  : Bisync successful

$ rclone bisync --help
Usage:

rclone bisync remote1:path1 remote2:path2 [flags]

Positional arguments:
Path1, Path2 Local path, or remote storage with ':' plus optional path.
Type 'rclone listremotes' for list of configured remotes.

Optional Flags:
--check-access Ensure expected `RCLONE_TEST` files are found on
both Path1 and Path2 filesystems, else abort.
--check-filename FILENAME Filename for `--check-access` (default: `RCLONE_TEST`)
--check-sync CHOICE Controls comparison of final listings:
`true | false | only` (default: true)
If set to `only`, bisync will only compare listings
from the last run but skip actual sync.
--filters-file PATH Read filtering patterns from a file
--max-delete PERCENT Safety check on maximum percentage of deleted files allowed.
If exceeded, the bisync run will abort. (default: 50%)
--force Bypass `--max-delete` safety check and run the sync.
Consider using with `--verbose`
--remove-empty-dirs Remove empty directories at the final cleanup step.
-1, --resync Performs the resync run.
Warning: Path1 files may overwrite Path2 versions.
Consider using `--verbose` or `--dry-run` first.
--localtime Use local time in listings (default: UTC)
--no-cleanup Retain working files (useful for troubleshooting and testing).
--workdir PATH Use custom working directory (useful for testing).
(default: `~/.cache/rclone/bisync`)
-n, --dry-run Go through the motions - No files are copied/deleted.
-v, --verbose Increases logging verbosity.
May be specified more than once for more details.
-h, --help help for bisync

Arbitrary rclone flags may be specified on the bisync command line (https://rclone.org/commands/rclone_bisync/), for example:

rclone bisync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s

Note that the interactions of various rclone flags with the bisync process flow have not been fully tested yet.

Path1 and Path2 arguments may be references to any mix of local directory paths (absolute or relative), UNC paths (//server/share/path), Windows drive paths (with a drive letter and :) or configured remotes (https://rclone.org/docs/#syntax-of-remote-paths) with optional subdirectory paths. Cloud references are distinguished by having a : in the argument (see Windows support below).

Path1 and Path2 are treated equally, in that neither has priority for file changes, and access efficiency does not change whether a remote is on Path1 or Path2.

The listings in the bisync working directory (default: ~/.cache/rclone/bisync) are named based on the Path1 and Path2 arguments so that separate syncs to individual directories within the tree may be set up, e.g.: path_to_local_tree..dropbox_subdir.lst.

Any empty directories after the sync on both the Path1 and Path2 filesystems are not deleted by default. If the --remove-empty-dirs flag is specified, then both paths will have any empty directories purged as the last step in the process.

Running with --resync will effectively make both the Path1 and Path2 filesystems contain a matching superset of all files. Path2 files that do not exist in Path1 will be copied to Path1, and the process will then sync the Path1 tree to Path2.

The base directories on both the Path1 and Path2 filesystems must exist or bisync will fail. This is required for safety, so that bisync can verify that both paths are valid.

When using --resync, a newer version of a file on the Path2 filesystem will be overwritten by the Path1 filesystem version. Carefully evaluate deltas using --dry-run (https://rclone.org/flags/#non-backend-flags).

For a resync run, one of the paths may be empty (no files in the path tree). The resync run should result in files on both paths, else a normal non-resync run will fail.

For a non-resync run, either path being empty (no files in the tree) fails with Empty current PathN listing. Cannot sync to an empty directory: X.pathN.lst. This is a safety check to ensure that an unexpected empty path does not result in deleting everything in the other path.

Access check files are an additional safety measure against data loss. bisync will ensure it can find matching RCLONE_TEST files in the same places in the Path1 and Path2 filesystems. Time stamps and file contents are not important, just the names and locations. Place one or more RCLONE_TEST files in the Path1 or Path2 filesystem and then do either a run without --check-access or a --resync to set matching files on both filesystems. If you have symbolic links in your sync tree it is recommended to place RCLONE_TEST files in the linked-to directory tree, to protect against bisync assuming a bunch of files were deleted if the linked-to tree becomes inaccessible. Also see the --check-filename flag.
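
A minimal sketch of setting this up (paths are placeholders): create the marker file on one side, propagate it with a resync, then enable the check on subsequent runs:

touch /local/path/RCLONE_TEST
rclone bisync /local/path remote:path --resync
rclone bisync /local/path remote:path --check-access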

As a safety check, if greater than the --max-delete percent of files were deleted on either the Path1 or Path2 filesystem, then bisync will abort with a warning message, without making any changes. The default --max-delete is 50%. One way to trigger this limit is to rename a directory that contains more than half of your files. This will appear to bisync as a bunch of deleted files and a bunch of new files. This safety check is intended to block bisync from deleting all of the files on both filesystems due to a temporary network access issue, or if the user had inadvertently deleted the files on one side or the other. To force the sync either set a different delete percentage limit, e.g. --max-delete 75 (allows up to 75% deletion), or use --force to bypass the check.
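
For example (paths are placeholders), either raise the limit or bypass the check entirely:

rclone bisync /local/path remote:path --max-delete 75
rclone bisync /local/path remote:path --force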

Also see the all files changed check.

By using rclone filter features you can exclude file types or directory sub-trees from the sync. See the bisync filters section and the generic --filter-from (https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file) documentation. An example filters file contains filters for non-allowed files for synching with Dropbox.

If you make changes to your filters file then bisync requires a run with --resync. This is a safety feature, which prevents existing files on the Path1 and/or Path2 side from seeming to disappear from view (since they are excluded in the new listings), which would fool bisync into seeing them as deleted (as compared to the prior run listings), and then bisync would proceed to delete them for real.

To block this from happening, bisync calculates an MD5 hash of the filters file and stores the hash in a .md5 file in the same place as your filters file. On the next runs with --filters-file set, bisync re-calculates the MD5 hash of the current filters file and compares it to the hash stored in the .md5 file. If they don't match, the run aborts with a critical error and thus forces you to do a --resync, likely avoiding a disaster.
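
In practice (paths are placeholders), after editing the filters file you would run once with --resync and then resume normal runs:

rclone bisync /local/path remote:path --filters-file /path/to/filters.txt --resync
rclone bisync /local/path remote:path --filters-file /path/to/filters.txt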

Enabled by default, the check-sync function checks that all of the same files exist in both the Path1 and Path2 history listings. This check-sync integrity check is performed at the end of the sync run by default. Any untrapped failing copy/delete operations between the two paths might result in differences between the two listings and in untracked file content differences between the two paths. A resync run would correct the error.

Note that the default-enabled integrity check loads both the final Path1 and Path2 listings locally, and thus adds to the run time of a sync. Using --check-sync=false will disable it and may significantly reduce the sync run times for very large numbers of files.

The check may be run manually with --check-sync=only. It runs only the integrity check and terminates without actually synching.
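
For example (paths are placeholders):

rclone bisync /local/path remote:path --check-sync=false   # skip the final integrity check
rclone bisync /local/path remote:path --check-sync=only    # run only the integrity check, no sync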

bisync retains the listings of the Path1 and Path2 filesystems from the prior run. On each successive run it will:

List files on Path1 and Path2, and check for changes on each side. Changes include New, Newer, Older, and Deleted files.
Propagate changes on Path1 to Path2, and vice-versa.

A lock file prevents multiple simultaneous runs when a run is taking a while. This can be particularly useful if bisync is run by a cron scheduler.
Handle change conflicts non-destructively by creating ..path1 and ..path2 file versions.
File system access health check using RCLONE_TEST files (see the --check-access flag).
Abort on excessive deletes - protects against a failed listing being interpreted as all the files were deleted. See the --max-delete and --force flags.
If something evil happens, bisync goes into a safe state to block damage by later runs. (See Error Handling)

Type          | Description                                   | Result                 | Implementation
Path2 new     | File is new on Path2, does not exist on Path1 | Path2 version survives | rclone copy Path2 to Path1
Path2 newer   | File is newer on Path2, unchanged on Path1    | Path2 version survives | rclone copy Path2 to Path1
Path2 deleted | File is deleted on Path2, unchanged on Path1  | File is deleted        | rclone delete Path1
Path1 new     | File is new on Path1, does not exist on Path2 | Path1 version survives | rclone copy Path1 to Path2
Path1 newer   | File is newer on Path1, unchanged on Path2    | Path1 version survives | rclone copy Path1 to Path2
Path1 older   | File is older on Path1, unchanged on Path2    | Path1 version survives | rclone copy Path1 to Path2
Path2 older   | File is older on Path2, unchanged on Path1    | Path2 version survives | rclone copy Path2 to Path1
Path1 deleted | File no longer exists on Path1                | File is deleted        | rclone delete Path2

Type                            | Description                                                         | Result                             | Implementation
Path1 new AND Path2 new         | File is new on Path1 AND new on Path2                               | Files renamed to _Path1 and _Path2 | rclone copy _Path2 file to Path1, rclone copy _Path1 file to Path2
Path2 newer AND Path1 changed   | File is newer on Path2 AND also changed (newer/older/size) on Path1 | Files renamed to _Path1 and _Path2 | rclone copy _Path2 file to Path1, rclone copy _Path1 file to Path2
Path2 newer AND Path1 deleted   | File is newer on Path2 AND also deleted on Path1                    | Path2 version survives             | rclone copy Path2 to Path1
Path2 deleted AND Path1 changed | File is deleted on Path2 AND changed (newer/older/size) on Path1    | Path1 version survives             | rclone copy Path1 to Path2
Path1 deleted AND Path2 changed | File is deleted on Path1 AND changed (newer/older/size) on Path2    | Path2 version survives             | rclone copy Path2 to Path1

If all prior existing files on either of the filesystems have changed (e.g. timestamps have changed due to changing the system's timezone) then bisync will abort without making any changes. Any new files are not considered for this check. You could use --force to force the sync (whichever side has the changed timestamp files wins). Alternatively, a --resync may be used (Path1 versions will be pushed to Path2). Consider the situation carefully and perhaps use --dry-run before you commit to the changes.

Modification time

Bisync relies on file timestamps to identify changed files and will refuse to operate if the backend lacks modification time support.

If you or your application should change the content of a file without changing the modification time then bisync will not notice the change, and thus will not copy it to the other side.

Note that on some cloud storage systems it is not possible to have file timestamps that match precisely between the local and other filesystems.

Bisync's approach to this problem is to track the changes on each side separately over time with a local database of files on that side, then apply the resulting changes on the other side.

Certain bisync critical errors, such as file copy/move failing, will result in a bisync lockout of following runs. The lockout is asserted because the sync status and history of the Path1 and Path2 filesystems cannot be trusted, so it is safer to block any further changes until someone checks things out. The recovery is to do a --resync again.

It is recommended to use --resync --dry-run --verbose initially and carefully review what changes will be made before running the --resync without --dry-run.
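
That is (paths are placeholders), review the deltas first and only then let the resync run for real:

rclone bisync /local/path remote:path --resync --dry-run --verbose
rclone bisync /local/path remote:path --resync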

Most of these events come up due to an error status from an internal call. On such a critical error the {...}.path1.lst and {...}.path2.lst listing files are renamed with extension .lst-err, which blocks any future bisync runs (since the normal .lst files are not found). Bisync keeps them under the bisync subdirectory of the rclone cache directory, typically at ${HOME}/.cache/rclone/bisync/ on Linux.

Some errors are considered temporary and re-running the bisync is not blocked. The critical return blocks further bisync runs.

When bisync is running, a lock file is created in the bisync working directory, typically at ~/.cache/rclone/bisync/PATH1..PATH2.lck on Linux. If bisync should crash or hang, the lock file will remain in place and block any further runs of bisync for the same paths. Delete the lock file as part of debugging the situation. The lock file effectively blocks follow-on (e.g., scheduled by cron) runs when the prior invocation is taking a long time. The lock file contains PID of the blocking process, which may help in debug.
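
For example, if a crashed run has left a stale lock behind (PATH1..PATH2 stands for the name built from your two paths):

cat ~/.cache/rclone/bisync/PATH1..PATH2.lck   # shows the PID of the blocking process
rm ~/.cache/rclone/bisync/PATH1..PATH2.lck    # remove the lock once you are sure that process is gone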

Note that while concurrent bisync runs are allowed, be very cautious that there is no overlap in the trees being synched between concurrent runs, lest there be replicated files, deleted files and general mayhem.

rclone bisync returns the following codes to the calling program:
- 0 on a successful run,
- 1 for a non-critical failing run (a rerun may be successful),
- 2 for a critically aborted run (requires a --resync to recover).
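
A minimal wrapper sketch acting on these return codes (paths are placeholders):

#!/bin/sh
rclone bisync /path/to/local remote:tree --check-access
case $? in
  0) echo "bisync successful" ;;
  1) echo "non-critical failure - a rerun may succeed" ;;
  2) echo "critical abort - run again with --resync to recover" ;;
esac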

Bisync is considered BETA and has been tested with the following backends:
- Local filesystem
- Google Drive
- Dropbox
- OneDrive
- S3
- SFTP
- Yandex Disk

It has not been fully tested with other services yet. If it works, or sorta works, please let us know and we'll update the list. Run the test suite to check for proper operation as described below.

The first release of rclone bisync requires that the underlying backend supports the modification time feature and will refuse to run otherwise. This limitation will be lifted in a future rclone bisync release.

When using Local, FTP or SFTP remotes rclone does not create temporary files at the destination when copying, and thus if the connection is lost the created file may be corrupt, which will likely propagate back to the original path on the next sync, resulting in data loss. This will be solved in a future release, there is no workaround at the moment.

Files that change during a bisync run may result in data loss. This has been seen in a highly dynamic environment, where the filesystem is getting hammered by running processes during the sync. The solution is to sync at quiet times or filter out unnecessary directories and files.

New empty directories on one path are not propagated to the other side. This is because bisync (and rclone) natively works on files not directories. The following sequence is a workaround but will not propagate the delete of an empty directory to the other side:

rclone bisync PATH1 PATH2
rclone copy PATH1 PATH2 --filter "+ */" --filter "- **" --create-empty-src-dirs
rclone copy PATH2 PATH1 --filter "+ */" --filter "- **" --create-empty-src-dirs

Renaming a folder on the Path1 side results in deleting all files on the Path2 side and then copying all files again from Path1 to Path2. Bisync sees all files in the old directory name as deleted and all files in the new directory name as new. Similarly, renaming a directory on both sides to the same name will result in creating ..path1 and ..path2 files on both sides. Currently the most effective and efficient method of renaming a directory is to rename it on both sides, then do a --resync.

Synching with case-insensitive filesystems, such as Windows or Box, can result in file name conflicts. This will be fixed in a future release. The near term workaround is to make sure that files on both sides don't have spelling case differences (Smile.jpg vs. smile.jpg).

Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on Windows Github runners.

Drive letters are allowed, including drive letters mapped to network drives (rclone bisync J:\localsync GDrive:). If a drive letter is omitted, the shell's current drive is the default. Drive letters are a single character followed by :, so cloud names must be more than one character long.

Absolute paths (with or without a drive letter), and relative paths (with or without a drive letter) are supported.

Working directory is created at C:\Users\MyLogin\AppData\Local\rclone\bisync.

Note that bisync output may show a mix of forward / and back \ slashes.

Be careful of case-insensitive directory and file naming on Windows vs. case-sensitive naming on Linux.

See filtering documentation (https://rclone.org/filtering/) for how filter rules are written and interpreted.

Bisync's --filters-file flag slightly extends the rclone's --filter-from (https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file) filtering mechanism. For a given bisync run you may provide only one --filters-file. The --include*, --exclude*, and --filter flags are also supported.

Filtering portions of the directory tree is a critical feature for synching.

Examples of directory trees (always beneath the Path1/Path2 root level) you may want to exclude from your sync:
- Directory trees containing only software build intermediate files.
- Directory trees containing application temporary files and data such as the Windows C:\Users\MyLogin\AppData\ tree.
- Directory trees containing files that are large, less important, or are getting thrashed continuously by ongoing processes.

On the other hand, there may be only select directories that you actually want to sync, and exclude all others. See the Example include-style filters for Windows user directories below.

1.
Begin with excluding directory trees:
e.g. `- /AppData/`
** on the end is not necessary. Once a given directory level is excluded then everything beneath it won't be looked at by rclone.
Exclude such directories that are unneeded, are big, dynamically thrashed, or where there may be access permission issues.
Excluding such dirs first will make rclone operations (much) faster.
Specific files may also be excluded, as with the Dropbox exclusions example below.
2.
Decide if it's easier (or cleaner) to:
Include select directories and therefore exclude everything else -- or --
Exclude select directories and therefore include everything else
3.
Include select directories:
Add lines like: `+ /Documents/PersonalFiles/**` to select which directories to include in the sync.
** on the end specifies to include the full depth of the specified tree.
With Include-style filters, files at the Path1/Path2 root are not included. They may be included with `+ /*`.
Place RCLONE_TEST files within these included directory trees. They will only be looked for in these directory trees.
Finish by excluding everything else by adding `- **` at the end of the filters file.
Disregard step 4.
4.
Exclude select directories:
Add more lines like in step 1. For example: `- /Desktop/tempfiles/`, or `- /testdir/`. Again, a ** on the end is not necessary.
Do not add a `- **` in the file. Without this line, everything will be included that has not been explicitly excluded.
Disregard step 3.

A few rules for the syntax of a filter file expanding on filtering documentation (https://rclone.org/filtering/):

Lines may start with spaces and tabs - rclone strips leading whitespace.
If the first non-whitespace character is a # then the line is a comment and will be ignored.
Blank lines are ignored.
The first non-whitespace character on a filter line must be a + or -.
Exactly 1 space is allowed between the +/- and the path term.
Only forward slashes (/) are used in path terms, even on Windows.
The rest of the line is taken as the path term. Trailing whitespace is taken literally, and probably is an error.

This Windows include-style example is based on the sync root (Path1) set to C:\Users\MyLogin. The strategy is to select specific directories to be synched with a network drive (Path2).

`- /AppData/` excludes an entire tree of Windows stored stuff that need not be synched. In my case, AppData has >11 GB of stuff I don't care about, and there are some subdirectories beneath AppData that are not accessible to my user login, resulting in bisync critical aborts.
Windows creates cache files starting with both upper and lowercase NTUSER at C:\Users\MyLogin. These files may be dynamic or locked, and are generally of no interest.
There are just a few directories with my data that I do want synched, selected with `+` include lines such as `+ /Documents/Family/**` (see the example below). By selecting only the directory trees I want, I avoid the dozen plus directories that various apps make at C:\Users\MyLogin.
Include files in the root of the sync point, C:\Users\MyLogin, by adding the `+ /*` line.
This is an Include-style filters file, therefore it ends with `- **` which excludes everything not explicitly included.
- /AppData/
- NTUSER*
- ntuser*
+ /Documents/Family/**
+ /Documents/Sketchup/**
+ /Documents/Microcapture_Photo/**
+ /Documents/Microcapture_Video/**
+ /Desktop/**
+ /Pictures/**
+ /*
- **

Note also that Windows implements several "library" links such as C:\Users\MyLogin\My Documents\My Music pointing to C:\Users\MyLogin\Music. rclone sees these as links, so you must add --links to the bisync command line if you wish to follow these links. I find that I get permission errors in trying to follow the links, so I don't include the rclone --links flag, but then you get lots of Can't follow symlink... noise from rclone about not following the links. This noise can be quashed by adding --quiet to the bisync command line.

Dropbox disallows synching the listed temporary and configuration/data files. The `- ` filter lines exclude these files wherever they may occur in the sync tree. Consider adding similar exclusions for file types you don't need to sync, such as core dump and software build files.
bisync testing creates /testdir/ at the top level of the sync tree, and usually deletes the tree after the test. If a normal sync should run while the /testdir/ tree exists the --check-access phase may fail due to unbalanced RCLONE_TEST files. The `- /testdir/` filter blocks this tree from being synched. You don't need this exclusion if you are not doing bisync development testing.
Everything else beneath the Path1/Path2 root will be synched.
RCLONE_TEST files may be placed anywhere within the tree, including the root.

# Filter file for use with bisync
# See https://rclone.org/filtering/ for filtering rules
# NOTICE: If you make changes to this file you MUST do a --resync run.
#         Run with --dry-run to see what changes will be made.
# Dropbox wont sync some files so filter them away here.
# See https://help.dropbox.com/installs-integrations/sync-uploads/files-not-syncing
- .dropbox.attr
- ~*.tmp
- ~$*
- .~*
- desktop.ini
- .dropbox
# Used for bisync testing, so excluded from normal runs
- /testdir/
# Other example filters
#- /TiBU/
#- /Photos/

At the start of a bisync run, listings are gathered for Path1 and Path2 while using the user's --filters-file. During the check access phase, bisync scans these listings for RCLONE_TEST files. Any RCLONE_TEST files hidden by the --filters-file are not in the listings and thus not checked during the check access phase.

Here are two normal runs. The first one has a newer file on the remote. The second has no deltas between local and remote.

2021/05/16 00:24:38 INFO  : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/"
2021/05/16 00:24:38 INFO  : Path1 checking for diffs
2021/05/16 00:24:38 INFO  : - Path1    File is new                         - file.txt
2021/05/16 00:24:38 INFO  : Path1:    1 changes:    1 new,    0 newer,    0 older,    0 deleted
2021/05/16 00:24:38 INFO  : Path2 checking for diffs
2021/05/16 00:24:38 INFO  : Applying changes
2021/05/16 00:24:38 INFO  : - Path1    Queue copy to Path2                 - dropbox:/file.txt
2021/05/16 00:24:38 INFO  : - Path1    Do queued copies to                 - Path2
2021/05/16 00:24:38 INFO  : Updating listings
2021/05/16 00:24:38 INFO  : Validating listings for Path1 "/path/to/local/tree/" vs Path2 "dropbox:/"
2021/05/16 00:24:38 INFO  : Bisync successful
2021/05/16 00:36:52 INFO  : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/"
2021/05/16 00:36:52 INFO  : Path1 checking for diffs
2021/05/16 00:36:52 INFO  : Path2 checking for diffs
2021/05/16 00:36:52 INFO  : No changes found
2021/05/16 00:36:52 INFO  : Updating listings
2021/05/16 00:36:52 INFO  : Validating listings for Path1 "/path/to/local/tree/" vs Path2 "dropbox:/"
2021/05/16 00:36:52 INFO  : Bisync successful

The --dry-run messages may indicate that it would try to delete some files. For example, if a file is new on Path2 and does not exist on Path1 then it would normally be copied to Path1, but with --dry-run enabled those copies don't happen, which leads to the attempted delete on Path2, blocked again by --dry-run: ... Not deleting as --dry-run.

This whole confusing situation is an artifact of the --dry-run flag. Scrutinize the proposed deletes carefully, and if the files would have been copied to Path1 then the threatened deletes on Path2 may be disregarded.

Rclone has built in retries. If you run with --verbose you'll see error and retry messages such as shown below. This is usually not a bug. If at the end of the run you see Bisync successful and not Bisync critical error or Bisync aborted then the run was successful, and you can ignore the error messages.

The following run shows an intermittent fail. Lines 5 and 6 are low-level messages; line 6 is a bubbled-up warning message, conveying the error. Rclone normally retries failing commands, so there may be numerous such messages in the log.

Since there are no final error/warning messages on line 7, rclone has recovered from failure after a retry, and the overall sync was successful.

1: 2021/05/14 00:44:12 INFO  : Synching Path1 "/path/to/local/tree" with Path2 "dropbox:"
2: 2021/05/14 00:44:12 INFO  : Path1 checking for diffs
3: 2021/05/14 00:44:12 INFO  : Path2 checking for diffs
4: 2021/05/14 00:44:12 INFO  : Path2:  113 changes:   22 new,    0 newer,    0 older,   91 deleted
5: 2021/05/14 00:44:12 ERROR : /path/to/local/tree/objects/af: error listing: unexpected end of JSON input
6: 2021/05/14 00:44:12 NOTICE: WARNING  listing try 1 failed.                 - dropbox:
7: 2021/05/14 00:44:12 INFO  : Bisync successful

This log shows a Critical failure which requires a --resync to recover from. See the Runtime Error Handling section.

2021/05/12 00:49:40 INFO  : Google drive root '': Waiting for checks to finish
2021/05/12 00:49:40 INFO  : Google drive root '': Waiting for transfers to finish
2021/05/12 00:49:40 INFO  : Google drive root '': not deleting files as there were IO errors
2021/05/12 00:49:40 ERROR : Attempt 3/3 failed with 3 errors and: not deleting files as there were IO errors
2021/05/12 00:49:40 ERROR : Failed to sync: not deleting files as there were IO errors
2021/05/12 00:49:40 NOTICE: WARNING  rclone sync try 3 failed.           - /path/to/local/tree/
2021/05/12 00:49:40 ERROR : Bisync aborted. Must run --resync to recover.

Google Drive has a filter for certain file types (.exe, .apk, et cetera) that by default cannot be copied from Google Drive to the local filesystem. If you are having problems, run with --verbose to see specifically which files are generating complaints. If the error is This file has been identified as malware or spam and cannot be downloaded, consider using the flag --drive-acknowledge-abuse (https://rclone.org/drive/#drive-acknowledge-abuse).
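
A hedged example of adding that flag to a one-off download from a Google Drive remote (the remote name and paths are placeholders):

rclone copy gdrive:problem-folder /path/to/local --drive-acknowledge-abuse --verbose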

Google docs exist as virtual files on Google Drive and cannot be transferred to other filesystems natively. While it is possible to export a Google doc to a normal file (with .xlsx extension, for example), it is not possible to import a normal file back into a Google document.

Bisync's handling of Google Doc files is to flag them in the run log output for the user's attention and ignore them for any file transfers, deletes, or syncs. They will show up with a length of -1 in the listings. This bisync run is otherwise successful:

2021/05/11 08:23:15 INFO  : Synching Path1 "/path/to/local/tree/base/" with Path2 "GDrive:"
2021/05/11 08:23:15 INFO  : ...path2.lst-new: Ignoring incorrect line: "- -1 - - 2018-07-29T08:49:30.136000000+0000 GoogleDoc.docx"
2021/05/11 08:23:15 INFO  : Bisync successful

Rclone does not yet have a built-in capability to monitor the local file system for changes and must be run periodically. On Windows this can be done using the Task Scheduler; on Linux you can use cron, as described below.

The 1st example runs a sync every 5 minutes between a local directory and an OwnCloud server, with output logged to a runlog file:

# Minute (0-59)
#      Hour (0-23)
#           Day of Month (1-31)
#                Month (1-12 or Jan-Dec)
#                     Day of Week (0-6 or Sun-Sat)
#                         Command

*/5 * * * * /path/to/rclone bisync /local/files MyCloud: --check-access --filters-file /path/to/bisync-filters.txt --log-file /path/to/bisync.log

See crontab syntax (https://www.man7.org/linux/man-pages/man1/crontab.1p.html#INPUT_FILES) for the details of crontab time interval expressions.

If you run rclone bisync as a cron job, redirect stdout/stderr to a file. The 2nd example runs a sync to Dropbox every hour and logs all stdout (via the >>) and stderr (via 2>&1) to a log file.

0 * * * * /path/to/rclone bisync /path/to/local/dropbox Dropbox: --check-access --filters-file /home/user/filters.txt >> /path/to/logs/dropbox-run.log 2>&1

bisync can keep a local folder in sync with a cloud service, but what if you have some highly sensitive files to be synched?

Usage of a cloud service is for exchanging both routine and sensitive personal files between one's home network, one's personal notebook when on the road, and one's work computer. The routine data is not sensitive. For the sensitive data, configure an rclone crypt remote (https://rclone.org/crypt/) to point to a subdirectory within the local disk tree that is bisync'd to Dropbox, and then set up a bisync for this local crypt directory to a directory outside of the main sync tree.

/path/to/DBoxroot is the root of my local sync tree. There are numerous subdirectories.
/path/to/DBoxroot/crypt is the root subdirectory for files that are encrypted. This local directory target is setup as an rclone crypt remote named Dropcrypt:. See rclone.conf snippet below.
/path/to/my/unencrypted/files is the root of my sensitive files - not encrypted, not within the tree synched to Dropbox.
To sync my local unencrypted files with the encrypted Dropbox versions I manually run bisync /path/to/my/unencrypted/files DropCrypt:. This step could be bundled into a script to run before and after the full Dropbox tree sync in the last step, thus actively keeping the sensitive files in sync.
bisync /path/to/DBoxroot Dropbox: runs periodically via cron, keeping my full local sync tree in sync with Dropbox.

The Dropbox client runs keeping the local tree C:\Users\MyLogin\Dropbox always in sync with Dropbox. I could have used rclone bisync instead.
A separate directory tree at C:\Users\MyLogin\Documents\DropLocal hosts the tree of unencrypted files/folders.
To sync my local unencrypted files with the encrypted Dropbox versions I manually run the following command: rclone bisync C:\Users\MyLogin\Documents\DropLocal Dropcrypt:.
The Dropbox client then syncs the changes with Dropbox.

[Dropbox]
type = dropbox
...
[Dropcrypt]
type = crypt
remote = /path/to/DBoxroot/crypt          # on the Linux server
remote = C:\Users\MyLogin\Dropbox\crypt   # on the Windows notebook
filename_encryption = standard
directory_name_encryption = true
password = ...
...

You should read this section only if you are developing for rclone. You need to have rclone source code locally to work with bisync tests.

Bisync has a dedicated test framework implemented in the bisync_test.go file located in the rclone source tree. The test suite is based on the go test command. Series of tests are stored in subdirectories below the cmd/bisync/testdata directory. Individual tests can be invoked by their directory name, e.g. go test . -case basic -remote local -remote2 gdrive: -v

Tests will make a temporary folder on the remote and purge it afterwards. If during a test run there are intermittent errors and rclone retries, these errors will be captured and flagged as invalid MISCOMPAREs. Rerunning the test will let it pass. Consider such failures as noise.

usage: go test ./cmd/bisync [options...]
Options:

-case NAME Name(s) of the test case(s) to run. Multiple names should
be separated by commas. You can remove the `test_` prefix
and replace `_` by `-` in test name for convenience.
If not `all`, the name(s) should map to a directory under
`./cmd/bisync/testdata`.
Use `all` to run all tests (default: all)
-remote PATH1 `local` or name of cloud service with `:` (default: local)
-remote2 PATH2 `local` or name of cloud service with `:` (default: local)
-no-compare Disable comparing test results with the golden directory
(default: compare)
-no-cleanup Disable cleanup of Path1 and Path2 testdirs.
Useful for troubleshooting. (default: cleanup)
-golden Store results in the golden directory (default: false)
This flag can be used with multiple tests.
-debug Print debug messages
-stop-at NUM Stop test after given step number. (default: run to the end)
Implies `-no-compare` and `-no-cleanup`, if the test really
ends prematurely. Only meaningful for a single test case.
-refresh-times Force refreshing the target modtime, useful for Dropbox
(default: false)
-verbose Run tests verbosely

Note: unlike rclone flags which must be prefixed by double dash (--), the test command flags can be equally prefixed by a single - or double dash.

go test . -case basic -remote local -remote2 local runs the test_basic test case using only the local filesystem, synching one local directory with another local directory. Test script output is to the console, while commands within scenario.txt have their output sent to the .../workdir/test.log file, which is finally compared to the golden copy.
The first argument after go test should be a relative name of the directory containing bisync source code. If you run tests right from there, the argument will be . (current directory) as in most examples below. If you run bisync tests from the rclone source directory, the command should be go test ./cmd/bisync ....
The test engine will mangle rclone output to ensure comparability with golden listings and logs.
Test scenarios are located in ./cmd/bisync/testdata. The test -case argument should match the full name of a subdirectory under that directory. Every test subdirectory name on disk must start with test_, this prefix can be omitted on command line for brevity. Also, underscores in the name can be replaced by dashes for convenience.
go test . -remote local -remote2 local -case all runs all tests.
Path1 and Path2 may either be the keyword local or may be names of configured cloud services. go test . -remote gdrive: -remote2 dropbox: -case basic will run the test between these two services, without transferring any files to the local filesystem.
Test run stdout and stderr console output may be directed to a file, e.g. go test . -remote gdrive: -remote2 local -case all > runlog.txt 2>&1

1.
The base setup in the initial directory of the testcase is applied on the Path1 and Path2 filesystems (via an rclone copy of the initial directory to Path1, then an rclone sync from Path1 to Path2).
2.
The commands in the scenario.txt file are applied, with output directed to the test.log file in the test working directory. Typically, the first actual command in the scenario.txt file is to do a --resync, which establishes the baseline {...}.path1.lst and {...}.path2.lst files in the test working directory (.../workdir/ relative to the temporary test directory). Various commands and listing snapshots are done within the test.
3.
Finally, the contents of the test working directory are compared to the contents of the testcase's golden directory.

Test cases are in individual directories beneath ./cmd/bisync/testdata. A command line reference to a test is understood to reference a directory beneath testdata. For example, go test ./cmd/bisync -case dry-run -remote gdrive: -remote2 local refers to the test case in ./cmd/bisync/testdata/test_dry_run.
The test working directory is located at .../workdir relative to a temporary test directory, usually under /tmp on Linux.
The local test sync tree is created at a temporary directory named like bisync.XXX under system temporary directory.
The remote test sync tree is located at a temporary directory under <remote:>/bisync.XXX/.
path1 and/or path2 subdirectories are created in a temporary directory under the respective local or cloud test remote.
By default, the Path1 and Path2 test dirs and workdir will be deleted after each test run. The -no-cleanup flag disables purging these directories when validating and debugging a given test. These directories will be flushed before running another test, independent of the -no-cleanup usage.
You will likely want to add `- /testdir/` to your normal bisync --filters-file so that normal syncs do not attempt to sync the test temporary directories, which may have RCLONE_TEST miscompares in some testcases which would otherwise trip the --check-access system. The --check-access mechanism is hard-coded to ignore RCLONE_TEST files beneath bisync/testdata, so the test cases may reside on the synched tree even if there are check file mismatches in the test tree.
Some Dropbox tests can fail, notably printing the following message: src and dst identical but can't set mod time without deleting and re-uploading. This is expected and happens due to the way Dropbox handles modification times. You should use the -refresh-times test flag to make up for this.
If Dropbox tests hit request limit for you and print error message too_many_requests/...: Too many requests or write operations. then follow the Dropbox App ID instructions (https://rclone.org/dropbox/#get-your-own-dropbox-app-id).

Sometimes even a slight change in the bisync source can cause little changes spread around many log files. Updating them manually would be a nightmare.

The -golden flag will store the test.log and *.lst listings from each test case into respective golden directories. Golden results will automatically contain generic strings instead of local or cloud paths which means that they should match when run with a different cloud service.

Your normal workflow might be as follows:
1. Git-clone the rclone sources locally.
2. Modify bisync source and check that it builds.
3. Run the whole test suite: go test ./cmd/bisync -remote local
4. If some tests show log difference, recheck them individually, e.g.: go test ./cmd/bisync -remote local -case basic
5. If you are convinced with the difference, goldenize all tests at once: go test ./cmd/bisync -remote local -golden
6. Use word diff: git diff --word-diff ./cmd/bisync/testdata/. Please note that normal line-level diff is generally useless here.
7. Check the difference carefully!
8. Commit the change (git commit) only if you are sure. If unsure, save your code changes then wipe the log diffs from git: git reset [--hard].

<testname>/initial/ contains a tree of files that will be set as the initial condition on both Path1 and Path2 testdirs.
<testname>/modfiles/ contains files that will be used to modify the Path1 and/or Path2 filesystems.
<testname>/golden/ contains the expected content of the test working directory (workdir) at the completion of the testcase.
<testname>/scenario.txt contains the body of the test, in the form of various commands to modify files, run bisync, and snapshot listings. Output from these commands is captured to .../workdir/test.log for comparison to the golden files.

test <some message> Print the line to the console and to the test.log: test sync is working correctly with options x, y, z
copy-listings <prefix> Save a copy of all .lst listings in the test working directory with the specified prefix: copy-listings exclude-pass-run
move-listings <prefix> Similar to copy-listings but removes the source
purge-children <dir> This will delete all child files and purge all child subdirs under given directory but keep the parent intact. This behavior is important for tests with Google Drive because removing and re-creating the parent would change its ID.
delete-file <file> Delete a single file.
delete-glob <dir> <pattern> Delete a group of files located one level deep in the given directory with names matching a given glob pattern.
touch-glob YYYY-MM-DD <dir> <pattern> Change modification time on a group of files.
touch-copy YYYY-MM-DD <source-file> <dest-dir> Change file modification time then copy it to destination.
copy-file <source-file> <dest-dir> Copy a single file to given directory.
copy-as <source-file> <dest-file> Similar to above but destination must include both directory and the new file name at destination.
copy-dir <src> <dst> and sync-dir <src> <dst> Copy/sync a directory. Equivalent of rclone copy and rclone sync.
list-dirs <dir> Equivalent to rclone lsf -R --dirs-only <dir>
bisync [options] Runs bisync against -remote and -remote2.

{testdir/} - the root dir of the testcase
{datadir/} - the modfiles dir under the testcase root
{workdir/} - the temporary test working directory
{path1/} - the root of the Path1 test directory tree
{path2/} - the root of the Path2 test directory tree
{session} - base name of the test listings
{/} - OS-specific path separator
{spc}, {tab}, {eol} - whitespace
{chr:HH} - raw byte with given hexadecimal code

Substitution results of the terms named like {dir/} will end with / (or backslash on Windows), so it is not necessary to include slash in the usage, for example delete-file {path1/}file1.txt.

This section is work in progress.

Here are a few data points for scale, execution times, and memory usage.

The first set of data was taken between a local disk and Dropbox. The speedtest.net (https://speedtest.net) download speed was ~170 Mbps, and upload speed was ~10 Mbps. 500 files (~9.5 MB each) had been already synched. 50 files were added in a new directory, each ~9.5 MB, ~475 MB total.

Change                                | Operations and times                                   | Overall run time
500 files synched (nothing to move)   | 1x listings for Path1 & Path2                          | 1.5 sec
500 files synched with --check-access | 1x listings for Path1 & Path2                          | 1.5 sec
50 new files on remote                | Queued 50 copies down: 27 sec                          | 29 sec
Moved local dir                       | Queued 50 copies up: 410 sec, 50 deletes up: 9 sec     | 421 sec
Moved remote dir                      | Queued 50 copies down: 31 sec, 50 deletes down: <1 sec | 33 sec
Delete local dir                      | Queued 50 deletes up: 9 sec                            | 13 sec

This next data is from a user's application. They had ~400GB of data over 1.96 million files being sync'ed between a Windows local disk and some remote cloud. The file full path length was on average 35 characters (which factors into load time and RAM required).

Loading the prior listing into memory (1.96 million files, listing file size 140 MB) took ~30 sec and occupied about 1 GB of RAM.
Getting a fresh listing of the local file system (producing the 140 MB output file) took about XXX sec.
Getting a fresh listing of the remote file system (producing the 140 MB output file) took about XXX sec. The network download speed was measured at XXX Mb/s.
Once the prior and current Path1 and Path2 listings were loaded (a total of four to be loaded, two at a time), determining the deltas was pretty quick (a few seconds for this test case), and the transfer time for any files to be copied was dominated by the network bandwidth.

rclone's bisync implementation was derived from the rclonesync-V2 (https://github.com/cjnaz/rclonesync-V2) project, including documentation and test mechanisms, with the full support and encouragement of @cjnaz (https://github.com/cjnaz).

rclone bisync is similar in nature to a range of other projects:

unison (https://github.com/bcpierce00/unison)
syncthing (https://github.com/syncthing/syncthing)
cjnaz/rclonesync (https://github.com/cjnaz/rclonesync-V2)
ConorWilliams/rsinc (https://github.com/ConorWilliams/rsinc)
jwink3101/syncrclone (https://github.com/Jwink3101/syncrclone)
DavideRossi/upback (https://github.com/DavideRossi/upback)

Bisync adopts the differential synchronization technique, which is based on keeping a history of changes performed by both synchronizing sides. See the Dual Shadow Method section in Neil Fraser's article (https://neil.fraser.name/writing/sync/).

Also note a number of academic publications by Benjamin Pierce (http://www.cis.upenn.edu/%7Ebcpierce/papers/index.shtml#File%20Synchronization) about Unison and synchronization in general.

This is a backend for the 1fichier (https://1fichier.com) cloud storage service. Note that a Premium subscription is required to use the API.

Paths are specified as remote:path

Paths may be as deep as required, e.g. remote:directory/subdirectory.

The initial setup for 1Fichier involves getting the API key from the website which you need to do in your browser.

Here is an example of how to make a remote called remote. First run:


rclone config

This will guide you through an interactive setup process:

No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / 1Fichier

\ "fichier" [snip] Storage> fichier ** See help for fichier backend at: https://rclone.org/fichier/ ** Your API Key, get it from https://1fichier.com/console/params.pl Enter a string value. Press Enter for the default (""). api_key> example_key Edit advanced config? (y/n) y) Yes n) No y/n> Remote config -------------------- [remote] type = fichier api_key = example_key -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y

Once configured you can then use rclone like this,

List directories in top level of your 1Fichier account

rclone lsd remote:

List all the files in your 1Fichier account

rclone ls remote:

To copy a local directory to a 1Fichier directory called backup

rclone copy /home/source remote:backup

Modified time and hashes

1Fichier does not support modification times. It supports the Whirlpool hash algorithm.

Duplicated files

1Fichier can have two files with exactly the same name and path (unlike a normal file system).

Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.

Restricted filename characters

In addition to the default restricted characters set (https://rclone.org/overview/#restricted-characters) the following characters are also replaced:

Character Value Replacement
\ 0x5C
< 0x3C
> 0x3E
" 0x22
$ 0x24
` 0x60
' 0x27

File names can also not start or end with the following characters. These only get replaced if they are the first or last character in the name:

Character Value Replacement
SP 0x20

Invalid UTF-8 bytes will also be replaced (https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.

Standard options

Here are the Standard options specific to fichier (1Fichier).

Your API Key, get it from https://1fichier.com/console/params.pl.

Properties:

Config: api_key
Env Var: RCLONE_FICHIER_API_KEY
Type: string
Required: false

Advanced options

Here are the Advanced options specific to fichier (1Fichier).

If you want to download a shared folder, add this parameter.

Properties:

Config: shared_folder
Env Var: RCLONE_FICHIER_SHARED_FOLDER
Type: string
Required: false

If you want to download a shared file that is password protected, add this parameter.

NB Input to this must be obscured - see rclone obscure (https://rclone.org/commands/rclone_obscure/).

Properties:

Config: file_password
Env Var: RCLONE_FICHIER_FILE_PASSWORD
Type: string
Required: false
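
For example, to generate the obscured value to put into file_password (the plain-text password shown is a placeholder):

rclone obscure 'MySharedFilePassword'
# paste the printed value into the file_password setting (or RCLONE_FICHIER_FILE_PASSWORD)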

If you want to list the files in a shared folder that is password protected, add this parameter.

NB Input to this must be obscured - see rclone obscure (https://rclone.org/commands/rclone_obscure/).

Properties:

Config: folder_password
Env Var: RCLONE_FICHIER_FOLDER_PASSWORD
Type: string
Required: false

The encoding for the backend.

See the encoding section in the overview (https://rclone.org/overview/#encoding) for more info.

Properties:

Config: encoding
Env Var: RCLONE_FICHIER_ENCODING
Type: MultiEncoder
Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot

rclone about is not supported by the 1Fichier backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/)

The alias remote provides a new name for another remote.

Paths may be as deep as required or a local path, e.g. remote:directory/subdirectory or /directory/subdirectory.

During the initial setup with rclone config you will specify the target remote. The target remote can either be a local path or another remote.

Subfolders can be used in target remote. Assume an alias remote named backup with the target mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.
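
A minimal sketch of the rclone.conf entry behind such an alias, using the names from the example above:

[backup]
type = alias
remote = mydrive:private/backup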

There will be no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop. The empty path is not allowed as a remote. To alias the current directory use . instead.

Here is an example of how to make an alias called remote for local folder. First run:


rclone config

This will guide you through an interactive setup process:

No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Alias for an existing remote

\ "alias" [snip] Storage> alias Remote or path to alias. Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path". remote> /mnt/storage/backup Remote config -------------------- [remote] remote = /mnt/storage/backup -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y Current remotes: Name Type ==== ==== remote alias e) Edit existing remote n) New remote d) Delete remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config e/n/d/r/c/s/q> q

Once configured you can then use rclone like this,

List directories in top level in /mnt/storage/backup

rclone lsd remote:

List all the files in /mnt/storage/backup

rclone ls remote:

Copy another local directory to the alias directory called source

rclone copy /home/source remote:source

Standard options

Here are the Standard options specific to alias (Alias for an existing remote).

Remote or path to alias.

Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".

Properties:

Config: remote
Env Var: RCLONE_ALIAS_REMOTE
Type: string
Required: true

Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.

Important: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program (https://developer.amazon.com/amazon-drive) is now closed to new entries so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive.

For the history on why rclone no longer has a set of Amazon Drive API keys see the forum (https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/2314).

If you happen to know anyone who works at Amazon then please ask them to re-instate rclone into the Amazon Drive developer program - thanks!

The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config walks you through it.

The configuration process for Amazon Drive may involve using an oauth proxy (https://github.com/ncw/oauthproxy). This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.

Since rclone doesn't currently have its own Amazon Drive credentials, you will either need to have your own client_id and client_secret with Amazon Drive, or use a third-party oauth proxy, in which case you will need to enter client_id, client_secret, auth_url and token_url.

Note also that if you are not using Amazon's auth_url and token_url (i.e. you filled in something for those), then when setting up on a remote machine you can only use the copying-the-config method of configuration (https://rclone.org/remote_setup/#configuring-by-copying-the-config-file) - rclone authorize will not work.

Here is an example of how to make a remote called remote. First run:


rclone config

This will guide you through an interactive setup process:

No remotes found, make a new one?
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon Drive

\ "amazon cloud drive" [snip] Storage> amazon cloud drive Amazon Application Client Id - required. client_id> your client ID goes here Amazon Application Client Secret - required. client_secret> your client secret goes here Auth server URL - leave blank to use Amazon's. auth_url> Optional auth URL Token server url - leave blank to use Amazon's. token_url> Optional token URL Remote config Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config. Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine y) Yes n) No y/n> y If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... Got code -------------------- [remote] client_id = your client ID goes here client_secret = your client secret goes here auth_url = Optional auth URL token_url = Optional token URL token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"} -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y

See the remote setup docs (https://rclone.org/remote_setup/) for how to set it up on a machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

Once configured you can then use rclone like this,

List directories in top level of your Amazon Drive

rclone lsd remote:

List all the files in your Amazon Drive

rclone ls remote:

To copy a local directory to an Amazon Drive directory called backup

rclone copy /home/source remote:backup

Modified time and MD5SUMs

Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.

It does store MD5SUMs so for a more accurate sync, you can use the --checksum flag.
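
For example, to sync using checksums instead of modification times (paths follow the examples above):

rclone sync /home/source remote:backup --checksum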

Restricted filename characters

Character Value Replacement
NUL 0x00
/ 0x2F

Invalid UTF-8 bytes will also be replaced (https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings.

Deleting files

Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.

Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.

Standard options

Here are the Standard options specific to amazon cloud drive (Amazon Drive).

OAuth Client Id.

Leave blank normally.

Properties:

Config: client_id
Env Var: RCLONE_ACD_CLIENT_ID
Type: string
Required: false

OAuth Client Secret.

Leave blank normally.

Properties:

Config: client_secret
Env Var: RCLONE_ACD_CLIENT_SECRET
Type: string
Required: false

Advanced options

Here are the Advanced options specific to amazon cloud drive (Amazon Drive).

OAuth Access Token as a JSON blob.

Properties:

Config: token
Env Var: RCLONE_ACD_TOKEN
Type: string
Required: false

Auth server URL.

Leave blank to use the provider defaults.

Properties:

Config: auth_url
Env Var: RCLONE_ACD_AUTH_URL
Type: string
Required: false

Token server url.

Leave blank to use the provider defaults.

Properties:

Config: token_url
Env Var: RCLONE_ACD_TOKEN_URL
Type: string
Required: false

Checkpoint for internal polling (debug).

Properties:

Config: checkpoint
Env Var: RCLONE_ACD_CHECKPOINT
Type: string
Required: false

Additional time per GiB to wait after a failed complete upload to see if it appears.

Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1 GiB in size and nearly every time for files bigger than 10 GiB. This parameter controls the time rclone waits for the file to appear.

The default value for this parameter is 3 minutes per GiB, so by default it will wait 3 minutes for every GiB uploaded to see if the file appears.

You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.

These values were determined empirically by observing lots of uploads of big files for a range of file sizes.

Upload with the "-v" flag to see more info about what rclone is doing in this situation.

Properties:

Config: upload_wait_per_gb
Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
Type: Duration
Default: 3m0s
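
For example, to disable the wait for a single run by setting it to 0 via the environment variable above (paths and remote name are placeholders):

RCLONE_ACD_UPLOAD_WAIT_PER_GB=0s rclone copy /path/to/bigfile remote:backup -v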

Files >= this size will be downloaded via their tempLink.

Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10 GiB. The default for this is 9 GiB which shouldn't need to be changed.

To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.

Properties:

Config: templink_threshold
Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
Type: SizeSuffix
Default: 9Gi

The encoding for the backend.

See the encoding section in the overview (https://rclone.org/overview/#encoding) for more info.

Properties:

Config: encoding
Env Var: RCLONE_ACD_ENCODING
Type: MultiEncoder
Default: Slash,InvalidUtf8,Dot

Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries flag) which should hopefully work around this problem.

Amazon Drive has an internal limit on the size of files that can be uploaded to the service. This limit is not officially published, but all files larger than it will fail.

At the time of writing (Jan 2016) this limit is in the area of 50 GiB per file. This means that larger files are likely to fail.

Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation as it would any other failure. To avoid this problem, use the --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.
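
For example, to skip files that would exceed the limit (paths are placeholders):

rclone sync /path/to/source remote:backup --max-size 50000M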

rclone about is not supported by the Amazon Drive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone about (https://rclone.org/commands/rclone_about/)

The S3 backend can be used with a number of different providers:

AWS S3
Alibaba Cloud (Aliyun) Object Storage System (OSS)
Ceph
China Mobile Ecloud Elastic Object Storage (EOS)
Cloudflare R2
Arvan Cloud Object Storage (AOS)
DigitalOcean Spaces
Dreamhost
Huawei OBS
IBM COS S3
IDrive e2
Minio
RackCorp Object Storage
Scaleway
Seagate Lyve Cloud
SeaweedFS
StackPath
Storj
Tencent Cloud Object Storage (COS)
Wasabi

Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

Once you have made a remote (see the provider specific section above) you can use it like this:

See all buckets

rclone lsd remote:

Make a new bucket

rclone mkdir remote:bucket

List the contents of a bucket

rclone ls remote:bucket

Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

rclone sync -i /home/local/directory remote:bucket

Here is an example of making an s3 configuration for the AWS S3 provider. Most of this applies to the other providers as well; any differences are described below.

First run

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, ChinaMobile, ArvanCloud, Dreamhost, IBM COS, Minio, and Tencent COS

\ "s3" [snip] Storage> s3 Choose your S3 provider. Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
2 / Ceph Object Storage
\ "Ceph"
3 / Digital Ocean Spaces
\ "DigitalOcean"
4 / Dreamhost DreamObjects
\ "Dreamhost"
5 / IBM COS S3
\ "IBMCOS"
6 / Minio Object Storage
\ "Minio"
7 / Wasabi Object Storage
\ "Wasabi"
8 / Any other S3 compatible provider
\ "Other" provider> 1 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true" env_auth> 1 AWS Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> XXX AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> YYY Region to connect to. Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia, or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
/ US East (Ohio) Region
2 | Needs location constraint us-east-2.
\ "us-east-2"
/ US West (Oregon) Region
3 | Needs location constraint us-west-2.
\ "us-west-2"
/ US West (Northern California) Region
4 | Needs location constraint us-west-1.
\ "us-west-1"
/ Canada (Central) Region
5 | Needs location constraint ca-central-1.
\ "ca-central-1"
/ EU (Ireland) Region
6 | Needs location constraint EU or eu-west-1.
\ "eu-west-1"
/ EU (London) Region
7 | Needs location constraint eu-west-2.
\ "eu-west-2"
/ EU (Frankfurt) Region
8 | Needs location constraint eu-central-1.
\ "eu-central-1"
/ Asia Pacific (Singapore) Region
9 | Needs location constraint ap-southeast-1.
\ "ap-southeast-1"
/ Asia Pacific (Sydney) Region
10 | Needs location constraint ap-southeast-2.
\ "ap-southeast-2"
/ Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
\ "ap-northeast-1"
/ Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
\ "ap-northeast-2"
/ Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
\ "ap-south-1"
/ Asia Pacific (Hong Kong) Region
14 | Needs location constraint ap-east-1.
\ "ap-east-1"
/ South America (Sao Paulo) Region
15 | Needs location constraint sa-east-1.
\ "sa-east-1" region> 1 Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region. endpoint> Location constraint - must be set to match the Region. Used when creating buckets only. Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
\ ""
2 / US East (Ohio) Region.
\ "us-east-2"
3 / US West (Oregon) Region.
\ "us-west-2"
4 / US West (Northern California) Region.
\ "us-west-1"
5 / Canada (Central) Region.
\ "ca-central-1"
6 / EU (Ireland) Region.
\ "eu-west-1"
7 / EU (London) Region.
\ "eu-west-2"
8 / EU Region.
\ "EU"
9 / Asia Pacific (Singapore) Region.
\ "ap-southeast-1" 10 / Asia Pacific (Sydney) Region.
\ "ap-southeast-2" 11 / Asia Pacific (Tokyo) Region.
\ "ap-northeast-1" 12 / Asia Pacific (Seoul)
\ "ap-northeast-2" 13 / Asia Pacific (Mumbai)
\ "ap-south-1" 14 / Asia Pacific (Hong Kong)
\ "ap-east-1" 15 / South America (Sao Paulo) Region.
\ "sa-east-1" location_constraint> 1 Canned ACL used when creating buckets and/or storing objects in S3. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
3 | Granting this on a bucket is generally not recommended.
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
\ "authenticated-read"
/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-read"
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control" acl> 1 The server-side encryption algorithm used when storing this object in S3. Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256" server_side_encryption> 1 The storage class to use when storing objects in S3. Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Reduced redundancy storage class
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
5 / One Zone Infrequent Access storage class
\ "ONEZONE_IA"
6 / Glacier storage class
\ "GLACIER"
7 / Glacier Deep Archive storage class
\ "DEEP_ARCHIVE"
8 / Intelligent-Tiering storage class
\ "INTELLIGENT_TIERING"
9 / Glacier Instant Retrieval storage class
\ "GLACIER_IR" storage_class> 1 Remote config -------------------- [remote] type = s3 provider = AWS env_auth = false access_key_id = XXX secret_access_key = YYY region = us-east-1 endpoint = location_constraint = acl = private server_side_encryption = storage_class = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d>

Modified time

The modified time is stored as metadata on the object as X-Amz-Meta-Mtime, a floating point number of seconds since the epoch, accurate to 1 ns.

If the modification time needs to be updated rclone will attempt to perform a server-side copy to update the modification time, provided the object can be copied in a single part. If the object is larger than 5 GiB or is in Glacier or Glacier Deep Archive storage, the object will be uploaded rather than copied.

Note that reading this from the object takes an additional HEAD request as the metadata isn't returned in object listings.

By default, rclone will use the modification time of objects stored in S3 for syncing. This is stored in object metadata which unfortunately takes an extra HEAD request to read which can be expensive (in time and money).

The modification time is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient on S3 because it requires an extra API call to retrieve the metadata.

The extra API calls can be avoided when syncing (using rclone sync or rclone copy) in a few different ways, each with its own tradeoffs.

--size-only

Only checks the size of files.
Uses no extra transactions.
If the file doesn't change size then rclone won't detect it has changed.

rclone sync --size-only /path/to/source s3:bucket

--checksum

Checks the size and MD5 checksum of files.
Uses no extra transactions.
The most accurate detection of changes possible.
Will cause the source to read an MD5 checksum which, if it is a local disk, will cause lots of disk activity.
If the source and destination are both S3 this is the recommended flag to use for maximum efficiency.

rclone sync --checksum /path/to/source s3:bucket

--update --use-server-modtime

Uses no extra transactions.
Modification time becomes the time the object was uploaded.
For many operations this is sufficient to determine if it needs uploading.
Using --update along with --use-server-modtime avoids the extra API call and uploads files whose local modification time is newer than the time it was last uploaded.
Files created with timestamps in the past will be missed by the sync.

rclone sync --update --use-server-modtime /path/to/source s3:bucket

These flags can and should be used in combination with --fast-list - see below.

If you are using rclone mount or any command which uses the VFS (e.g. rclone serve) then you might want to consider using the VFS flag --no-modtime, which will stop rclone reading the modification time for every object. You could also use --use-server-modtime if you are happy with the modification times of the objects being the time of upload.
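
For example, a mount that never reads per-object modification times could look like this (the bucket name and mountpoint are placeholders):

rclone mount --no-modtime s3:bucket /path/to/mountpoint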

Rclone's default directory traversal is to process each directory individually. This takes one API call per directory. Using the --fast-list flag will read all info about the objects into memory first using a smaller number of API calls (one per 1000 objects). See the rclone docs (https://rclone.org/docs/#fast-list) for more details.

rclone sync --fast-list --checksum /path/to/source s3:bucket

--fast-list trades off API transactions for memory use. As a rough guide rclone uses 1k of memory per object stored, so using --fast-list on a sync of a million objects will use roughly 1 GiB of RAM.

If you are only copying a small number of files into a big repository then using --no-traverse is a good idea. This finds objects directly instead of through directory listings. You can do a "top-up" sync very cheaply by using --max-age and --no-traverse to copy only recent files, e.g.

rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket

You'd then do a full rclone sync less often.

Note that --fast-list isn't required in the top-up sync.

By default, rclone will HEAD every object it uploads. It does this to check the object got uploaded correctly.

You can disable this with the --s3-no-head option - see there for more details.

Setting this flag increases the chance for undetected upload failures.
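
If you decide that trade-off is acceptable, the flag is simply added to the transfer, for example (paths are placeholders):

rclone copy --s3-no-head /path/to/source s3:bucket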

Hashes

For small objects which weren't uploaded as multipart uploads (objects sized below --s3-upload-cutoff if uploaded with rclone) rclone uses the ETag: header as an MD5 checksum.

However for objects which were uploaded as multipart uploads or with server side encryption (SSE-AWS or SSE-C) the ETag header is no longer the MD5 sum of the data, so rclone adds an additional piece of metadata X-Amz-Meta-Md5chksum which is a base64 encoded MD5 hash (in the same format as is required for Content-MD5).
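
As an illustration of that format (not something rclone requires you to run), the following shell pipeline produces a base64 encoded MD5 digest of the kind used for Content-MD5; the file path is a placeholder and openssl and base64 are assumed to be available:

openssl md5 -binary /path/to/file | base64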

For large objects, calculating this hash can take some time so the addition of this hash can be disabled with --s3-disable-checksum. This will mean that these objects do not have an MD5 checksum.
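
For example, to skip the checksum calculation on an upload (paths are placeholders):

rclone copy --s3-disable-checksum /path/to/source s3:bucket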

Note that reading this from the object takes an additional HEAD request as the metadata isn't returned in object listings.

If you run rclone cleanup s3:bucket then it will remove all pending multipart uploads older than 24 hours. You can use the -i flag to see exactly what it will do. If you want more control over the expiry date then run rclone backend cleanup s3:bucket -o max-age=1h to expire all uploads older than one hour. You can use rclone backend list-multipart-uploads s3:bucket to see the pending multipart uploads.

Restricted filename characters

S3 allows any valid UTF-8 string as a key.

Invalid UTF-8 bytes will be replaced (https://rclone.org/overview/#invalid-utf8), as they can't be used in XML.

The following characters are replaced since these are problematic when dealing with the REST API:

Character   Value   Replacement
NUL         0x00    ␀
/           0x2F    ／

The encoding will also encode these file names as they don't seem to work with the SDK properly:

File name   Replacement
.           ．
..          ．．

Multipart uploads

rclone supports multipart uploads with S3 which means that it can upload files bigger than 5 GiB.

Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.

rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff. This can be a maximum of 5 GiB and a minimum of 0 (i.e. always use multipart uploads).

The chunk sizes used in the multipart upload are specified by --s3-chunk-size and the number of chunks uploaded concurrently is specified by --s3-upload-concurrency.

Multipart uploads will use --transfers * --s3-upload-concurrency * --s3-chunk-size extra memory. Single part uploads do not use extra memory.

Single part transfers can be faster than multipart transfers or slower depending on your latency from S3 - the more latency, the more likely single part transfers will be faster.

Increasing --s3-upload-concurrency will increase throughput (8 would be a sensible value) and increasing --s3-chunk-size also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
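
As a rough sketch of how these flags combine, using the illustrative values above (paths are placeholders), the following transfer would use about 4 * 8 * 16M = 512 MiB of extra memory for multipart uploads:

rclone copy --transfers 4 --s3-upload-concurrency 8 --s3-chunk-size 16M /path/to/source s3:bucket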

With Amazon S3 you can list buckets (rclone lsd) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region.
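
If you hit this, tell rclone which region the bucket was created in, either with region = in the remote's config or on the command line; for example (region and bucket are placeholders):

rclone ls --s3-region eu-west-1 s3:bucket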

Authentication

There are a number of ways to supply rclone with a set of AWS credentials, with and without using the environment.

The different authentication methods are tried in this order:

Directly in the rclone configuration file (env_auth = false in the config file):
access_key_id and secret_access_key are required.
session_token can be optionally set when using AWS STS.
Runtime configuration (env_auth = true in the config file):
Export the following environment variables before running rclone:
Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
Session Token: AWS_SESSION_TOKEN (optional)
Or, use a named profile (https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html):
Profile files are standard files used by AWS CLI tools
By default it will use the credentials file in your home directory (e.g. ~/.aws/credentials on unix based systems) and the "default" profile; to change this, set these environment variables (see the example after this list):
AWS_SHARED_CREDENTIALS_FILE to control which file.
AWS_PROFILE to control which profile to use.
Or, run rclone in an ECS task with an IAM role (AWS only).
Or, run rclone on an EC2 instance with an IAM role (AWS only).
Or, run rclone in an EKS pod with an IAM role that is associated with a service account (AWS only).
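
A minimal sketch of the named profile approach; the credentials file path and profile name are placeholders, and the remote "s3" is assumed to be configured with env_auth = true:

export AWS_SHARED_CREDENTIALS_FILE=~/.aws/credentials
export AWS_PROFILE=work
rclone lsd s3: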

If none of these options actually ends up providing rclone with AWS credentials then S3 interaction will be non-authenticated (see below).

When using the sync subcommand of rclone the following minimum permissions are required to be available on the bucket being written to:

ListBucket
DeleteObject
GetObject
PutObject
PutObjectACL

When using the lsd subcommand, the ListAllMyBuckets permission is required.

Example policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
            },
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*",
                "arn:aws:s3:::BUCKET_NAME"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}

Notes on above:

1. This is a policy that can be used when creating a bucket. It assumes that USER_NAME has been created.
2. The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects.

For reference, here's an Ansible script (https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b) that will generate one or more buckets that will work with rclone sync.
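
If you would rather attach the policy by hand, a sketch using the AWS CLI (assuming the JSON above has been saved as policy.json and the bucket already exists; names are placeholders) could be:

aws s3api put-bucket-policy --bucket BUCKET_NAME --policy file://policy.json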

If you are using server-side encryption with KMS then you must make sure rclone is configured with server_side_encryption = aws:kms otherwise you will find you can't transfer small objects - these will create checksum errors.
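
A minimal sketch of the relevant part of such a remote's configuration; the remote name is a placeholder and the remaining options are whatever you would normally use:

[s3-kms]
type = s3
provider = AWS
server_side_encryption = aws:kms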

You can upload objects using the glacier storage class or transition them to glacier using a lifecycle policy (http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html). The bucket can still be synced or copied into normally, but if rclone tries to access data from the glacier storage class you will see an error like below.

2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

In this case you need to restore (http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html) the object(s) in question before using rclone.
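
Recent versions of rclone can also issue the restore request themselves via the S3 backend command; a sketch with placeholder paths, restoring a copy for one day at Standard priority:

rclone backend restore s3:bucket/path/to/file -o priority=Standard -o lifetime=1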

Note that rclone only speaks the S3 API; it does not speak the Glacier Vault API, so rclone cannot directly access Glacier Vaults.

According to AWS's documentation on S3 Object Lock (https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-permission):

If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.

As mentioned in the Hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set --s3-upload-cutoff 0 and force all the files to be uploaded as multipart.
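
For example (paths are placeholders):

rclone copy --s3-upload-cutoff 0 /path/to/source s3:bucket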

Standard options

Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).

--s3-provider

Choose your S3 provider.

Properties:

Config: provider
Env Var: RCLONE_S3_PROVIDER
Type: string
Required: false
Examples:
"AWS"
Amazon Web Services (AWS) S3
"Alibaba"
Alibaba Cloud Object Storage System (OSS) formerly Aliyun
"Ceph"
Ceph Object Storage
"ChinaMobile"
China Mobile Ecloud Elastic Object Storage (EOS)
"Cloudflare"
Cloudflare R2 Storage
"ArvanCloud"
Arvan Cloud Object Storage (AOS)
"DigitalOcean"
Digital Ocean Spaces
"Dreamhost"
Dreamhost DreamObjects
"HuaweiOBS"
Huawei Object Storage Service
"IBMCOS"
IBM COS S3
"IDrive"
IDrive e2
"LyveCloud"
Seagate Lyve Cloud
"Minio"
Minio Object Storage
"Netease"
Netease Object Storage (NOS)
"RackCorp"
RackCorp Object Storage
"Scaleway"
Scaleway Object Storage
"SeaweedFS"
SeaweedFS S3
"StackPath"
StackPath Object Storage
"Storj"
Storj (S3 Compatible Gateway)
"TencentCOS"
Tencent Cloud Object Storage (COS)
"Wasabi"
Wasabi Object Storage
"Other"
Any other S3 compatible provider