acd_cli documentation¶
Version 0.3.1
Contents:
Setting up acd_cli¶
Check which Python 3 version is installed on your system, e.g. by running
python3 -V
If it is Python 3.2.3, 3.3.0 or 3.3.1, you will need to upgrade to a later version.
You may now proceed to install using pip or your Arch package manager, or to build Debian/RedHat packages.
Installation with PIP¶
If you are new to Python, or worried about dependencies or possibly messing up your system, create and activate a virtualenv like so:
cd /parent/path/to/your/new/virtualenv
virtualenv acdcli
source acdcli/bin/activate
You are now safe to install and test acd_cli. When you are finished, the environment can be disabled by closing your shell or running deactivate.
Please check which pip command is appropriate for Python 3 packages in your environment. I will be using ‘pip3’ as superuser in the examples.
The recommended and most up-to-date way is to directly install the master branch from GitHub.
pip3 install --upgrade git+https://github.com/yadayada/acd_cli.git
Alternatively, the easiest way is to install directly from PyPI.
pip3 install --upgrade --pre acdcli
PIP Errors¶
A version incompatibility may arise with PIP when upgrading the requests package. PIP will throw the following error:
ImportError: cannot import name 'IncompleteRead'
Run these commands to fix it:
apt-get remove python3-pip
easy_install3 pip
This will remove the distribution’s pip3 package and replace it with a version that is compatible with the newer requests package.
Installation on Arch/Debian/RedHat¶
Arch Linux¶
There are two packages for Arch Linux in the AUR, acd_cli-git, which is linked to the master branch of the GitHub repository, and acd_cli, which is linked to the PyPI release.
Building deb/rpm packages¶
You will need to have fpm installed to build packages.
There is a Makefile that includes commands to build Debian packages (make deb) or RedHat packages (make rpm). It will also build the required requests-toolbelt package.
fpm may also be able to build packages for other distributions or operating systems.
Environment Variables¶
Cache Path and Settings Path¶
You will find the current path settings in the output of acd_cli -v init.
The cache path is where acd_cli stores OAuth data, the node cache, logs, etc. You may override the cache path by setting the ACD_CLI_CACHE_PATH environment variable.
Proxy support¶
Requests supports HTTP(S) proxies via environment variables. Since all connections to Amazon Cloud Drive use HTTPS, you need to set the HTTPS_PROXY variable. The following example shows how to do that in a bash-compatible environment.
export HTTPS_PROXY="https://user:pass@1.2.3.4:8080/"
Locale¶
If you need non-ASCII file/directory names, please check that your system’s locale is set correctly.
Dependencies¶
FUSE¶
For the mounting feature, fuse >= 2.6 is needed according to fusepy. On a Debian-based distribution, the package should be named simply ‘fuse’.
Python Packages¶
Under normal circumstances, it should not be necessary to install the dependencies manually.
- appdirs
- colorama
- dateutils (recommended)
- requests >= 2.1.0
- requests-toolbelt (recommended)
- sqlalchemy
Recommended packages are not strictly necessary; but they will be preferred to workarounds (in the case of dateutils) and bundled modules (requests-toolbelt).
If you want to install the dependencies using your distribution’s packaging system and are using a distro based on Debian ‘jessie’, the necessary packages are python3-appdirs python3-colorama python3-dateutil python3-requests python3-sqlalchemy.
Uninstalling¶
Please run acd_cli delete-everything first to delete your authentication and node data in the cache path. Then, use pip to uninstall:
pip3 uninstall acdcli
Then, revoke the permission for acd_cli_oa to access your cloud drive in your Amazon profile, more precisely at https://www.amazon.com/ap/adam.
Authorization¶
Before you can use the program, you will have to complete the OAuth procedure with Amazon. There is a fast and simple way and a secure way.
Simple (Appspot)¶
You will not have to prepare anything to initiate this authorization method; just run, for example, acd_cli init.
A browser (tab) will open and you will be asked to log into your Amazon account and grant access for ‘acd_cli_oa’.
Signing in or clicking on ‘Continue’ will download a JSON file named oauth_data, which must be placed in the cache directory displayed on screen (e.g. /home/<USER>/.cache/acd_cli).
You may view the source code of the Appspot app that is used to handle the server part of the OAuth procedure at https://tensile-runway-92512.appspot.com/src.
Advanced Users (Security Profile)¶
You must create a security profile and have it whitelisted. Have a look at Amazon’s
ACD getting started guide.
Select all permissions for your security profile and add http://localhost as a redirect URL.
Put your own security profile data into a file called client_data in the cache directory, adhering to the following form:
{
"CLIENT_ID": "amzn1.application-oa2-client.0123456789abcdef0123456789abcdef",
"CLIENT_SECRET": "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
}
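If acd_cli complains about the file, a quick way to check that it parses as JSON and contains both expected keys is a short snippet like the following. This is only a convenience sketch; acd_cli performs its own validation, and the helper name is hypothetical:

```python
import json

def looks_like_client_data(text: str) -> bool:
    """Return True if the text parses as JSON and contains the two
    keys shown in the example above (hypothetical helper)."""
    try:
        data = json.loads(text)
    except ValueError:
        return False
    return {"CLIENT_ID", "CLIENT_SECRET"} <= set(data)

sample = '{"CLIENT_ID": "amzn1.application-oa2-client.0123", "CLIENT_SECRET": "abcd"}'
print(looks_like_client_data(sample))  # True
```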
You may now run acd_cli -v init.
The authentication procedure is similar to the one above. A browser (tab) will be opened and you will be asked to log in. Unless you have a local webserver running on port 80, you will be redirected to your browser’s error page. Just copy the URL (e.g. http://localhost/?code=AbCdEfGhIjKlMnOpQrSt&scope=clouddrive%3Aread_all+clouddrive%3Awrite) into the console.
Changing authorization methods¶
If you want to change between authorization methods, go to your cache path (it is stated in the output of acd_cli -v init) and delete the file oauth_data and, if it exists, client_data.
Usage¶
acd_cli may be invoked as acd_cli or acdcli.
Most actions need the node cache to be initialized and up-to-date, so please run a sync. A sync will fetch the changes since the last sync or the full node list if the cache is empty.
The following actions are built in:
sync (s) refresh node list cache; necessary for many actions
clear-cache (cc) clear node cache [offline operation]
tree (t) print directory tree [offline operation]
children (ls) list a folder's children [offline operation]
find (f) find nodes by name [offline operation] [case insensitive]
find-md5 (fm) find files by MD5 hash [offline operation]
find-regex (fr) find nodes by regular expression [offline operation] [case insensitive]
upload (ul) file and directory upload to a remote destination
overwrite (ov) overwrite file A [remote] with content of file B [local]
stream (st) upload the standard input stream to a file
download (dl) download a remote folder or file; will skip existing local files
cat output a file to the standard output stream
create (c, mkdir) create folder using an absolute path
list-trash (lt) list trashed nodes [offline operation]
trash (rm) move node to trash
restore (re) restore node from trash
move (mv) move node A into folder B
rename (rn) rename a node
resolve (rs) resolve a path to a node ID [offline operation]
usage (u) show drive usage data
quota (q) show drive quota [raw JSON]
metadata (m) print a node's metadata [raw JSON]
mount mount the cloud drive at a local directory
umount unmount cloud drive(s)
Please run acd_cli --help to get a current list of the available actions. A list of further arguments of an action and their order can be printed by calling acd_cli [action] --help.
Most node arguments may be specified as a 22-character ID or a UNIX-style path. Trashed nodes’ paths may not resolve correctly; use their IDs instead.
There are more detailed instructions for file transfer actions, find actions and FUSE documentation.
Logs will automatically be saved into the cache directory.
Global Flags/Parameters¶
--verbose (-v) and --debug (-d) will print additional messages to standard error.
--no-log (-nl) will disable the automatic logging feature that saves log files to the cache directory.
--color will set the coloring mode according to the specified argument (auto, never or always). Coloring is turned off by default; it is used for file/folder listings.
--check (-c) sets the start-up database integrity check mode. The default is to perform a full check. Setting the check to quick or none may speed up the initialization for large databases.
--utf (-u) will force the output to be encoded in UTF-8, regardless of the system’s settings.
Exit Status¶
When the script is done running, its exit status can be checked for flags. If no error occurs, the exit status will be 0. Possible flag values are:
flag | value |
---|---|
general error | 1 |
argument error | 2 |
failed file transfer | 8 |
upload timeout | 16 |
hash mismatch | 32 |
error creating folder | 64 |
file size mismatch | 128 |
cache outdated | 256 |
remote duplicate | 512 |
duplicate inode | 1024 |
name collision | 2048 |
error deleting source file | 4096 |
If multiple errors occur, their values will be compounded by a binary OR operation.
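Since the flags are powers of two, a compound exit status can be decomposed by testing each bit. A minimal sketch using the table above (decode_exit_status is a hypothetical helper, not part of acd_cli):

```python
# Flag values from the table above; a compound exit status is the
# bitwise OR of all flags that occurred.
FLAGS = {
    1: "general error",
    2: "argument error",
    8: "failed file transfer",
    16: "upload timeout",
    32: "hash mismatch",
    64: "error creating folder",
    128: "file size mismatch",
    256: "cache outdated",
    512: "remote duplicate",
    1024: "duplicate inode",
    2048: "name collision",
    4096: "error deleting source file",
}

def decode_exit_status(status: int) -> list:
    """Return the descriptions of all flags set in a compound status."""
    return [desc for flag, desc in sorted(FLAGS.items()) if status & flag]

print(decode_exit_status(40))  # 40 == 8 | 32
```

For example, an exit status of 40 means a failed file transfer combined with a hash mismatch.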
File transfer¶
acd_cli offers multi-file transfer actions - upload and download - and single-file transfer actions - overwrite, stream and cat.
Multi-file transfers can be done over concurrent connections by specifying the argument -x NUM.
If remote folder hierarchies or local directory hierarchies need to be created, this will be done
prior to the file transfers.
Actions¶
upload¶
The upload action will upload files or recursively upload directories. Normally, existing files will not be changed.
Syntax:
acdcli upload /local/path [/local/next_path [...]] /remote/path
If the --overwrite (-o) argument is specified, a remote file will be updated if
a) the local file’s modification time is higher or
b) the local file’s creation time is higher and the file size is different.
The --force (-f) argument can be used to force an overwrite.
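The overwrite rule above can be written as a predicate. This is only an illustration of the documented condition, not acd_cli’s actual implementation; the function name and parameters are hypothetical:

```python
def should_overwrite(local_mtime: float, local_ctime: float, local_size: int,
                     remote_mtime: float, remote_ctime: float, remote_size: int) -> bool:
    """Documented --overwrite rule: update the remote file if
    a) the local modification time is higher, or
    b) the local creation time is higher and the sizes differ."""
    if local_mtime > remote_mtime:
        return True
    return local_ctime > remote_ctime and local_size != remote_size
```

With --force, this check is skipped and the remote file is overwritten unconditionally.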
Hint
When uploading large files (>10GiB), a warning about a timeout may be displayed. You then need to wait a few minutes, sync and manually check if the file was uploaded correctly.
overwrite¶
The overwrite action overwrites the content of a remote file with a local file.
Syntax:
acdcli overwrite /local/path /remote/path
download¶
The download action can download a single file or recursively download a directory. If a file already exists locally, it will not be overwritten.
Syntax:
acdcli download /remote/path [/local/path]
If the local path is omitted, the destination path will be the current working directory.
stream¶
This action will upload the standard input stream to a file.
Syntax:
some_process | acdcli stream file_name /remote/path
If the --overwrite (-o) argument is specified, the remote file will be overwritten if it exists.
cat¶
This action outputs the content of a file to standard output.
Abort/Resume¶
Incomplete file downloads will be resumed automatically. Aborted file uploads are not resumable at the moment.
Folder or directory hierarchies that were created for a transfer do not need to be recreated when resuming a transfer.
Retry¶
Failed upload, download and overwrite actions can be retried by specifying the --max-retries (-r) argument, e.g. acd_cli <ACTION> -r MAX_RETRIES.
Exclusion¶
Files may be excluded from upload or download by regex on their name or by file ending. Additionally, paths can be excluded from upload. Regexes and file endings are case-insensitive.
It is possible to specify multiple exclusion arguments of the same kind.
Deduplication¶
Server-side deduplication prevents completely uploaded files from being saved as a node if another file with the same MD5 checksum already exists. acd_cli can prevent uploading duplicates by checking local files’ sizes and MD5 checksums. Empty files are never regarded as duplicates.
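The client-side check described above amounts to comparing a local file’s size and MD5 against known remote nodes. A minimal sketch using only the standard library; the remote_index mapping is a hypothetical placeholder, not acd_cli’s actual cache:

```python
import hashlib
import os

def file_md5(path: str) -> str:
    """Compute a file's MD5 in chunks so large files do not fill memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_duplicate(path: str, remote_index: dict) -> bool:
    """remote_index is a hypothetical {(size, md5): node_id} mapping.
    Empty files are never treated as duplicates, matching acd_cli."""
    size = os.path.getsize(path)
    if size == 0:
        return False
    return (size, file_md5(path)) in remote_index
```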
Finding nodes¶
The find actions will search for normal (active) and trashed nodes and list them.
find¶
The find action performs a case-insensitive search for files and folders whose names include the name or name segment given as argument, so e.g. acdcli find foo will find “foo”, “Foobar”, etc.
find-md5¶
find-md5 will search for files that match the MD5 hash given. The location of a local file may be determined like so:
acdcli find-md5 `md5sum local/file | cut -d" " -f1`
FUSE module¶
Status¶
The FUSE support is still in an early stage and may be prone to bugs. acd_cli’s FUSE module has the following filesystem features implemented:
Feature | Working |
---|---|
Basic operations | |
List directory | ✓ |
Read | ✓ |
Write | ✓ [1] |
Rename | ✓ |
Move | ✓ |
Trashing | ✓ |
OS-level trashing | ✓ [2] |
View trash | ❌ |
Misc | |
Automatic sync | ✓ |
ctime/mtime update | ❌ |
Custom permissions | ❌ |
Hard links | partially [3] |
Symbolic links | ❌ [4] |
[1] | partial writes are not possible (i.e. writes at random offsets) |
[2] | restoring might not work |
[3] | manually created hard links will be displayed, but it is discouraged to use them |
[4] | soft links are not part of the ACD API |
Usage¶
The command to mount the (root of the) cloud drive to the empty directory path/to/mountpoint is
acd_cli mount path/to/mountpoint
A cloud drive folder may be mounted similarly, by
acd_cli mount --modules="subdir,subdir=/folder" path/to/mountpoint
Unmounting is usually achieved by the following command:
fusermount -u path/to/mountpoint
If the mount is busy, Linux users can use the --lazy (-z) flag.
There exists a convenience action acd_cli umount that unmounts all ACDFuse mounts on Linux and Mac OS.
Mount options¶
For further information on most of the options below, see your mount.fuse man page.
To convert the nodes’ standard character set (UTF-8) to the system locale, the modules argument may be used, e.g. --modules="iconv,to_code=CHARSET".
option | description |
---|---|
--allow-other, -ao | allow all users to access the mountpoint (may need extra configuration) |
--allow-root, -ar | allow the root user to access the mountpoint (may need extra configuration) |
--foreground, -fg | do not detach process until filesystem is destroyed (blocks) |
--interval INT, -i INT | set the node cache sync (refresh) interval to INT seconds |
--nlinks, -n | calculate the number of links for folders (slower) |
--nonempty, -ne | allow mounting to a non-empty mount point |
--read-only, -ro | disallow write operations (does not affect cache refresh) |
--single-threaded, -st | disallow multi-threaded FUSE operations |
Automatic remount¶
Linux users may use the systemd service file from the assets directory to have the cloud drive automatically remounted on login. Alternative ways are to add a crontab entry using the @reboot keyword or to add an fstab entry like so:
acdmount /mount/point fuse defaults 0 0
For this to work, an executable shell script /usr/bin/acdmount must be created:
#!/bin/bash
acd_cli mount "$1"
Please make sure your network connection is up before these commands are executed or the mount will fail.
Library Path¶
If you want or need to override the standard libfuse path, you may set the environment variable LIBFUSE_PATH to the full path of libfuse, e.g.
export LIBFUSE_PATH="/lib/x86_64-linux-gnu/libfuse.so.2"
This is particularly helpful if the libfuse library is properly installed, but not found.
Deleting Nodes¶
“Deleting” directories or files from the file system will only trash them in your cloud drive. Calling rmdir on a directory will always move it into the trash, even if it is not empty.
Logging¶
For debugging purposes, the recommended command to run is
acd_cli -d -nl mount -i0 -fg path/to/mountpoint
That command will disable the automatic refresh (i.e. sync) of the node cache (-i0) and disable detaching from the console.
Contributing guidelines¶
Using the Issue Tracker¶
The issue tracker is not a forum! This does not mean there is no need for good etiquette, but that you should not post unnecessary information. Each reply will cause a notification to be sent to all of the issue’s participants and some of them might consider it spam.
For minor corrections or additions, try to update your posts rather than writing a new reply. Use strike-through markdown for corrections and put updates at the bottom of your original post.
+1ing an issue or “me, too” replies will not get anything done faster.
Adding Issues¶
If you have a question, please read the documentation and search the issue tracker. If you still have a question, please consider using the Gitter chat or sending an e-mail to acd_cli@mail.com instead of opening an issue.
If you absolutely must open an issue, check that you are using the latest master commit and there is no existing issue that fits your problem (including closed and unresolved issues). Try to reproduce the issue on another machine or ideally on another operating system, if possible.
Please provide as much potentially relevant information as you can. This should at least contain:
- your operating system and Python version, e.g. as determined by
python3 -c 'import platform as p; print("%s\n%s" % (p.python_version(), p.platform()))'
- the command/s you used
- what happened
- what you think should have happened instead (and maybe give a reason)
You might find the --verbose and, to a lesser extent, --debug flags helpful.
Use code block markup for console output, log messages, etc.
Code¶
There are no real programming guidelines as of yet. Please use function annotations for typing as specified in PEP 3107 and, to stay 3.2-compliant, stringified PEP 484 type hints where appropriate. The limit on line length is 100 characters.
It is generally a good idea to explicitly announce that you are working on an issue.
Please squash your commits and add yourself to the contributors list before making a pull request.
Have a look at GitHub’s general guide on how to contribute. It is not necessary to create a feature branch, i.e. you may commit to the master branch.
There is also a TODO list of some of the open tasks.
Donations¶
You might also want to consider making a donation to further the development of acd_cli.
Contributors¶
Thanks to
- chrisidefix for adding the find-md5 action and forcing me to create a proper package and use PyPI
- msh100 for adding proxy documentation and updating the oauth scope
- hansendc for revamping the usage report
- legnaleurc for adding the find-regex action
- Timdawson264 for fixing st_nlinks in the FUSE node stat
- Lorentz83 for creating a bash completion script
- kylemanna for adding a systemd service file
Also thanks to
- fibersnet for pointing out a possible deadlock in ACDFuse.
- and everyone else who I forgot to mention
Frequently Asked Questions¶
Why Did I Get a UnicodeEncodeError?¶
If you encounter Unicode problems, check that your locale is set correctly.
Alternatively, you may use the --utf argument to force acd_cli to use UTF-8 output encoding regardless of your console’s current encoding.
Windows users may import the provided reg file (assets/win_codepage.reg), tested with Windows 8.1, to set the command line interface encoding to cp65001.
What Is acd_cli’s Installation Path?¶
On unixoid operating systems, the acd_cli script may be located by running which acd_cli or, if that does not yield a result, by executing pip3 show -f acdcli.
Where Does acd_cli Store its Cache and Settings?¶
You can see which paths are used in the log output of acd_cli -v init.
How Do I Pass a Node ID Starting with - (dash/minus/hyphen)?¶
Precede the node ID by two dashes and a space to have it interpreted as an argument and not as an option, e.g. -- -AbCdEfGhIjKlMnOpQr012.
Do Transfer Speeds Vary Depending on Geolocation?¶
Amazon may be throttling users not located in the U.S. To quote the Terms of Use,
The Service is offered in the United States. We may restrict access from other locations. There may be limits on the types of content you can store and share using the Service, such as file types we don’t support, and on the number or type of devices you can use to access the Service. We may impose other restrictions on use of the Service.
Ancient History¶
0.1.3¶
- plugin mechanism added
- OAuth now via Appspot; security profile no longer necessary
- back-off algorithm for API requests implemented
0.1.2¶
- new:
- overwriting of files
- recursive upload/download
- hashing of downloaded files
- clear-cache action
- fixes:
- remove-child accepted status code
- fix for upload of files with Unicode characters
- other:
- changed database schema
Development¶
Contents:
acdcli package¶
Subpackages¶
acdcli.api package¶
Submodules¶
acdcli.api.account module¶
ACD account information
acdcli.api.backoff_req module¶
class acdcli.api.backoff_req.BackOffRequest(auth_callback: 'requests.auth.AuthBase')[source]¶
Bases: object
Wrapper for requests that implements a timed back-off algorithm, see https://developer.amazon.com/public/apis/experience/cloud-drive/content/best-practices. Caution: this catches all connection errors and may stall for a long time. It is necessary to init this module before use.
__init__(auth_callback: 'requests.auth.AuthBase')[source]¶
Parameters: auth_callback – callable object that attaches auth info to a request
acdcli.api.backoff_req.CONN_TIMEOUT = 30¶
timeout for establishing a connection
acdcli.api.backoff_req.IDLE_TIMEOUT = 60¶
read timeout
acdcli.api.backoff_req.REQUESTS_TIMEOUT = (30, 60)¶
http://docs.python-requests.org/en/latest/user/advanced/#timeouts
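The timed back-off referenced above is commonly realized as exponential back-off with jitter. A minimal sketch of the general technique, not BackOffRequest’s actual code:

```python
import random

def backoff_delay(retry: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential back-off with full jitter: the waiting time roughly
    doubles with each consecutive failure, is capped, and is randomized
    so that many clients do not retry in lockstep."""
    return random.uniform(0.0, min(cap, base * (2 ** retry)))

# Delays grow with the retry counter but never exceed the cap.
delays = [backoff_delay(r) for r in range(6)]
print(delays)
```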
acdcli.api.client module¶
class acdcli.api.client.ACDClient(path='')[source]¶
Bases: acdcli.api.account.AccountMixin, acdcli.api.content.ContentMixin, acdcli.api.metadata.MetadataMixin, acdcli.api.trash.TrashMixin
Provides a client to the Amazon Cloud Drive RESTful interface.
content_url¶
metadata_url¶
acdcli.api.client.ENDPOINT_VAL_TIME = 259200¶
number of seconds for endpoint validity (3 days)
acdcli.api.common module¶
exception acdcli.api.common.RequestError(status_code: int, msg: str)[source]¶
Bases: Exception
Catch-all exception class for various connection and ACD server errors.
class CODE[source]¶
Bases: object
CONN_EXCEPTION = 1000¶
FAILED_SUBREQUEST = 1002¶
INCOMPLETE_RESULT = 1003¶
INVALID_TOKEN = 1005¶
REFRESH_FAILED = 1004¶
RequestError.codes = <lookup 'status_codes'>¶
acdcli.api.content module¶
acdcli.api.content.CHUNK_MAX_RETRY = 5¶
retry limit for a failed chunk
acdcli.api.content.CHUNK_SIZE = 524288000¶
download chunk size
class acdcli.api.content.ContentMixin[source]¶
Bases: object
Implements the content portion of the ACD API.
chunked_download(*args, **kwargs)¶
clear_file(node_id: str) → dict[source]¶
Clears a file’s content by overwriting it with an empty BytesIO.
Parameters: node_id – valid file node ID
download_chunk(node_id: str, offset: int, length: int, **kwargs) → bytearray[source]¶
Load a file chunk into memory.
Parameters: length – the length of the download chunk
download_file(node_id: str, basename: str, dirname: str=None, **kwargs)[source]¶
Deals with download preparation, download with chunked_download() and finish. Calls callbacks while fast-forwarding through an incomplete file (if existent). Will not check for an existing file prior to download and will overwrite an existing file on finish.
Parameters:
- dirname – a valid local directory name, or cwd if None
- basename – a valid file name
- kwargs – length: the total length of the file; write_callbacks (list[function]): passed on to chunked_download(); resume (bool=True): whether to resume if a partial file exists
download_thumbnail(node_id: str, file_name: str, max_dim=128)[source]¶
Download a movie’s or picture’s thumbnail into a file. Officially supports the image formats JPEG, BMP, PNG, TIFF, some RAW formats and the video formats MP4, QuickTime, AVI, MTS, MPEG, ASF, WMV, FLV, OGG; see http://www.amazon.com/gp/help/customer/display.html?nodeId=201634590. Additionally supports MKV.
Parameters: max_dim – maximum width or height of the resized image/video thumbnail
overwrite_file(node_id: str, file_name: str, read_callbacks: list=None, deduplication=False) → dict[source]¶
overwrite_stream(stream, node_id: str, read_callbacks: list=None) → dict[source]¶
Overwrite the content of the node with ID node_id with the content of stream.
Parameters: stream – readable object
response_chunk(node_id: str, offset: int, length: int, **kwargs) → requests.models.Response[source]¶
acdcli.api.content.FS_RW_CHUNK_SZ = 131072¶
basic chunk size for file system r/w operations
acdcli.api.content.PARTIAL_SUFFIX = '.__incomplete'¶
suffix (file ending) for incomplete files
acdcli.api.metadata module¶
Node metadata operations
acdcli.api.metadata.ChangeSet¶
alias of Changes
class acdcli.api.metadata.MetadataMixin[source]¶
Bases: object
add_child(parent_id: str, child_id: str) → dict[source]¶
Adds the node with ID child_id to the folder with ID parent_id.
Returns: updated child node dict
add_property(node_id: str, owner_id: str, key: str, value: str) → dict[source]¶
Adds or overwrites the key property with content. The maximum number of keys per owner is 10.
Parameters: value – string of length <= 500
Raises: RequestError: 404, <UnknownOperationException/> if owner is empty; RequestError: 400, {...} if the maximum of allowed properties is reached
Returns: dict {‘key’: ‘<KEY>’, ‘location’: ‘<NODE_ADDRESS>/properties/<OWNER_ID>/<KEY>’, ‘value’: ‘<VALUE>’}
delete_properties(node_id: str, owner_id: str)[source]¶
Deletes all of the owner’s properties. Uses multiple requests.
delete_property(node_id: str, owner_id: str, key: str)[source]¶
Deletes the key property from the node with ID node_id.
get_changes(checkpoint='', include_purged=False) → 'Generator[ChangeSet]'[source]¶
Generates a ChangeSet for each checkpoint in the changes response. See https://developer.amazon.com/public/apis/experience/cloud-drive/content/changes.
get_owner_id()[source]¶
Provisional function for retrieving the security profile’s name, a.k.a. owner ID.
list_properties(node_id: str, owner_id: str) → dict[source]¶
This will always return an empty dict if the accessor is not the owner.
Parameters: owner_id – owner ID (returns status 404 if empty)
acdcli.api.oauth module¶
class acdcli.api.oauth.AppspotOAuthHandler(path)[source]¶
Bases: acdcli.api.oauth.OAuthHandler
APPSPOT_URL = 'https://tensile-runway-92512.appspot.com/'¶
class acdcli.api.oauth.LocalOAuthHandler(path)[source]¶
Bases: acdcli.api.oauth.OAuthHandler
A local OAuth handler that works with a whitelisted security profile. The profile must not have been created prior to June 2015; profiles created before then are not able to use the new scope “clouddrive:read_all” that replaces “clouddrive:read”. https://developer.amazon.com/public/apis/experience/cloud-drive/content/getting-started
AMAZON_OA_LOGIN_URL = 'https://amazon.com/ap/oa'¶
AMAZON_OA_TOKEN_URL = 'https://api.amazon.com/auth/o2/token'¶
CLIENT_DATA_FILE = 'client_data'¶
REDIRECT_URI = 'http://localhost'¶
class acdcli.api.oauth.OAuthHandler(path)[source]¶
Bases: requests.auth.AuthBase
class KEYS[source]¶
Bases: object
ACC_TOKEN = 'access_token'¶
EXP_IN = 'expires_in'¶
EXP_TIME = 'exp_time'¶
REDIRECT_URI = 'redirect_uri'¶
REFR_TOKEN = 'refresh_token'¶
OAuthHandler.OAUTH_DATA_FILE = 'oauth_data'¶
OAuthHandler.check_oauth_file_exists()[source]¶
Checks for OAuth file existence and does a one-time initialization if necessary. Throws on error.
OAuthHandler.exp_time¶
OAuthHandler.get_access_token_info() → dict[source]¶
Returns: int exp: expiration time in sec, str aud: client id, user_id, app_id, iat (exp time)
OAuthHandler.get_auth_token(reload=True) → str[source]¶
Gets the current access token, refreshing it if necessary.
Parameters: reload – whether the oauth token file should be reloaded (external update)
OAuthHandler.load_oauth_data()[source]¶
Loads the oauth data file, validates it and adds the expiration time if necessary.
OAuthHandler.treat_auth_token(time_: float)[source]¶
Adds the expiration time to the member OAuth dict using the specified begin time.
acdcli.api.trash module¶
Node trashing and restoration. https://developer.amazon.com/public/apis/experience/cloud-drive/content/trash
Module contents¶
from acdcli.api import client

acd_client = client.ACDClient()
root = acd_client.get_root_id()
children = acd_client.list_children(root)
for child in children:
    print(child['name'])
# ...
This is the usual node JSON format for a file:
{
'contentProperties': {'contentType': 'text/plain',
'extension': 'txt',
'md5': 'd41d8cd98f00b204e9800998ecf8427e',
'size': 0,
'version': 1},
'createdBy': '<security-profile-nm>-<user>',
'createdDate': '2015-01-01T00:00:00.00Z',
'description': '',
'eTagResponse': 'AbCdEfGhI01',
'id': 'AbCdEfGhIjKlMnOpQr0123',
'isShared': False,
'kind': 'FILE',
'labels': [],
'modifiedDate': '2015-01-01T00:00:00.000Z',
'name': 'empty.txt',
'parents': ['0123AbCdEfGhIjKlMnOpQr'],
'restricted': False,
'status': 'AVAILABLE',
'version': 1
}
The modifiedDate and version keys get updated each time the content or metadata is updated. contentProperties['version'] gets updated on overwrite.
A folder’s JSON looks similar, but it lacks the contentProperties dictionary.
isShared is set to False even when a node is actually shared.
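Since folders lack the contentProperties dictionary, a node’s kind can be told apart from the JSON alone. A minimal sketch working with a node dict like the one above (describe_node is a hypothetical helper):

```python
def describe_node(node: dict) -> str:
    """Summarize a node dict; files carry contentProperties, folders do not."""
    if node.get("kind") == "FILE":
        size = node.get("contentProperties", {}).get("size", 0)
        return "file %s (%d bytes)" % (node["name"], size)
    return "folder %s" % node["name"]

file_node = {"kind": "FILE", "name": "empty.txt",
             "contentProperties": {"size": 0,
                                   "md5": "d41d8cd98f00b204e9800998ecf8427e"}}
print(describe_node(file_node))  # file empty.txt (0 bytes)
```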
Caution
ACD allows hard links for folders!
acdcli.bundled package¶
Submodules¶
acdcli.bundled.encoder module¶
This holds all of the implementation details of the MultipartEncoder
class acdcli.bundled.encoder.CustomBytesIO(buffer=None, encoding='utf-8')[source]¶
Bases: _io.BytesIO
len¶
class acdcli.bundled.encoder.MultipartEncoder(fields, boundary=None, encoding='utf-8')[source]¶
Bases: object
The MultipartEncoder object is a generic interface to the engine that will create a multipart/form-data body for you.
The basic usage is:
import requests
from requests_toolbelt import MultipartEncoder
encoder = MultipartEncoder({'field': 'value', 'other_field': 'other_value'})
r = requests.post('https://httpbin.org/post', data=encoder,
                  headers={'Content-Type': encoder.content_type})
If you do not need to take advantage of streaming the post body, you can also do:
r = requests.post('https://httpbin.org/post', data=encoder.to_string(),
                  headers={'Content-Type': encoder.content_type})
If you want the encoder to use a specific order, you can use an OrderedDict or, more simply, a list of tuples:
encoder = MultipartEncoder([('field', 'value'), ('other_field', 'other_value')])
Changed in version 0.4.0.
You can also provide tuples as part values as you would provide them to requests’ files parameter:
encoder = MultipartEncoder({
    'field': ('file_name', b'{"a": "b"}', 'application/json', {'X-My-Header': 'my-value'})
})
Warning
This object will end up directly in httplib. Currently, httplib has a hard-coded read size of 8192 bytes. This means that it will loop until the file has been read and your upload could take a while. This is not a bug in requests. A feature is being considered for this object to allow you, the user, to specify what size should be returned on a read. If you have opinions on this, please weigh in on this issue.
__init__(fields, boundary=None, encoding='utf-8')¶
boundary_value = None¶
Boundary value, either passed in by the user or created.
content_type¶
encoding = None¶
Encoding of the data being passed in.
fields = None¶
Fields provided by the user.
finished = None¶
Whether or not the encoder is finished.
len¶
Length of the multipart/form-data body.
requests will first attempt to get the length of the body by calling len(body) and then by checking for the len attribute.
On 32-bit systems, the __len__ method cannot return anything larger than an integer (in C) can hold. If the total size of the body is even slightly larger than 4 GB, users will see an OverflowError. This manifested itself in bug #80.
As such, we now calculate the length lazily as a property.
parts = None¶
Pre-computed parts of the upload.
read(size=-1)¶
Read data from the streaming encoder.
Parameters: size (int) – (optional) if provided, read will return exactly that many bytes; if not provided, it will return the remaining bytes.
Returns: bytes
to_string()¶
-
-
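Because read follows the standard file-like protocol, an encoder can be drained in fixed-size chunks, which is exactly what httplib does with its 8192-byte reads. The sketch below shows that generic loop; the name drain and the io.BytesIO stand-in for a MultipartEncoder are illustrative, not part of the library.

```python
import io


def drain(stream, chunk_size=8192):
    # Generic chunked-read loop; works for any object exposing read(size),
    # including a MultipartEncoder. 8192 matches httplib's hard-coded size.
    chunks = []
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:  # empty bytes signals the end of the stream
            break
        chunks.append(chunk)
    return b''.join(chunks)


payload = io.BytesIO(b'x' * 20000)
print(len(drain(payload)))  # 20000
```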
class
acdcli.bundled.encoder.
MultipartEncoderMonitor
(encoder, callback=None)[source]¶ Bases:
object
An object used to monitor the progress of a
MultipartEncoder
.
The MultipartEncoder should only be responsible for preparing and streaming the data. Anyone who wishes to monitor it should not use that instance to manage that as well. Using this class, they can monitor an encoder and register a callback. The callback receives the instance of the monitor.
To use this monitor, you construct your
MultipartEncoder
as you normally would.
from requests_toolbelt import (MultipartEncoder, MultipartEncoderMonitor)
import requests

def callback(monitor):
    # Do something with monitor.bytes_read
    pass

m = MultipartEncoder(fields={'field0': 'value0'})
monitor = MultipartEncoderMonitor(m, callback)
headers = {'Content-Type': monitor.content_type}
r = requests.post('https://httpbin.org/post', data=monitor, headers=headers)
Alternatively, if your use case is very simple, you can use the following pattern.
from requests_toolbelt import MultipartEncoderMonitor
import requests

def callback(monitor):
    # Do something with monitor.bytes_read
    pass

monitor = MultipartEncoderMonitor.from_fields(
    fields={'field0': 'value0'}, callback=callback
)
headers = {'Content-Type': monitor.content_type}
r = requests.post('https://httpbin.org/post', data=monitor, headers=headers)
-
__init__
(encoder, callback=None)¶
-
bytes_read
= None¶ Number of bytes already read from the
MultipartEncoder
instance
-
callback
= None¶ Optional function to call after a read
-
content_type
¶
-
encoder
= None¶ Instance of the
MultipartEncoder
being monitored
-
classmethod
from_fields
(fields, boundary=None, encoding='utf-8', callback=None)¶
-
len
= None¶ Avoid the same problem in bug #80
-
read
(size=-1)¶
-
to_string
()¶
-
-
class
acdcli.bundled.encoder.
Part
(headers, body)[source]¶ Bases:
object
-
bytes_left_to_write
()[source]¶ Determine if there are bytes left to write.
Returns: bool – True if there are bytes left to write, otherwise False
-
classmethod
from_field
(field, encoding)[source]¶ Create a part from a Request Field generated by urllib3.
-
write_to
(buffer, size)[source]¶ Write the requested amount of bytes to the buffer provided.
The number of bytes written may exceed size on the first read since we load the headers ambitiously.
Parameters: - buffer (CustomBytesIO) – buffer we want to write bytes to
- size (int) – number of bytes requested to be written to the buffer
Returns: int – number of bytes actually written
-
-
acdcli.bundled.encoder.
coerce_data
(data, encoding)[source]¶ Ensure that every object’s __len__ behaves uniformly.
-
acdcli.bundled.encoder.
encode_with
(string, encoding)[source]¶ Encodes
string
with
encoding
if necessary.
Parameters: - string – string to encode
- encoding (str) – encoding to use
Returns: encoded bytes object
-
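The behavior documented above can be sketched in a few lines: pass bytes through untouched, encode text, and leave None alone. This is a hedged re-implementation for illustration, not the bundled module's exact code.

```python
def encode_with(string, encoding):
    # Encode str to bytes only when needed; bytes and None pass through.
    if string is not None and not isinstance(string, bytes):
        return string.encode(encoding)
    return string


print(encode_with('kitten', 'utf-8'))  # b'kitten'
```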
acdcli.bundled.encoder.
readable_data
(data, encoding)[source]¶ Coerce the data to an object with a
read
method.
acdcli.bundled.fuse module¶
Module contents¶
acdcli.cache package¶
Submodules¶
acdcli.cache.cursors module¶
Cursor context managers
acdcli.cache.db module¶
-
class
acdcli.cache.db.
NodeCache
(path: str='', check=0)[source]¶ Bases:
acdcli.cache.schema.SchemaMixin, acdcli.cache.query.QueryMixin, acdcli.cache.sync.SyncMixin, acdcli.cache.format.FormatterMixin
-
IntegrityCheckType
= {'quick': 1, 'none': 2, 'full': 0}¶ types of SQLite integrity checks
-
integrity_check
(type_: {'quick': 1, 'none': 2, 'full': 0})[source]¶ Performs a self-integrity check on the database.
-
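The three check types map naturally onto SQLite's built-in pragmas. The sketch below shows one plausible mapping using the standard-library sqlite3 module; the function body is an assumption for illustration, not acd_cli's actual implementation.

```python
import sqlite3


def integrity_check(conn, kind='full'):
    # Hypothetical mapping of the documented types onto SQLite pragmas:
    # 'full' -> integrity_check, 'quick' -> quick_check, 'none' -> skip.
    if kind == 'none':
        return True
    pragmas = {'full': 'PRAGMA integrity_check;', 'quick': 'PRAGMA quick_check;'}
    row = conn.execute(pragmas[kind]).fetchone()
    return row[0] == 'ok'  # SQLite returns the single row 'ok' on success


conn = sqlite3.connect(':memory:')
print(integrity_check(conn, 'quick'))  # True for a fresh database
```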
acdcli.cache.format module¶
Formatters for query Bundle iterables. Capable of ANSI-type coloring using colors defined in
LS_COLORS
.
-
class
acdcli.cache.format.
FormatterMixin
[source]¶ Bases:
object
-
acdcli.cache.format.
color_file
(name: str) → str[source]¶ Colorizes a file name according to its file ending.
-
acdcli.cache.format.
color_status
(status)[source]¶ Creates a colored one-character status abbreviation.
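LS_COLORS-driven coloring as used by color_file above can be sketched as follows: parse the colon-separated `*.ext=code` entries and wrap matching names in ANSI escapes. The function signature and parsing here are illustrative assumptions, not acd_cli's exact code.

```python
import os


def color_file(name: str, ls_colors: str = None) -> str:
    # Hedged sketch: LS_COLORS holds entries such as '*.jpg=01;35',
    # separated by colons. Match on the file ending and wrap in ANSI codes.
    if ls_colors is None:
        ls_colors = os.environ.get('LS_COLORS', '')
    for entry in ls_colors.split(':'):
        if entry.startswith('*.') and '=' in entry:
            pattern, code = entry.split('=', 1)
            if name.endswith(pattern[1:]):  # '*.jpg' -> '.jpg'
                return '\x1b[%sm%s\x1b[0m' % (code, name)
    return name


print(color_file('kitten.jpg', '*.jpg=01;35'))  # name wrapped in ANSI codes
```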
acdcli.cache.query module¶
-
class
acdcli.cache.query.
Node
(row)[source]¶ Bases:
object
-
created
¶
-
is_available
¶
-
is_file
¶
-
is_folder
¶
-
is_trashed
¶
-
modified
¶
-
simple_name
¶
-
acdcli.cache.schema module¶
acdcli.cache.sync module¶
Syncs Amazon Node API objects with SQLite database.
-
class
acdcli.cache.sync.
SyncMixin
[source]¶ Bases:
object
Sync mixin to the
NodeCache
-
insert_folders
(folders: list)[source]¶ Inserts list of folders into cache. Sets ‘update’ column to current date.
Parameters: folders – list of raw dict-type folders
-
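The insert-and-stamp behavior described above can be sketched with the standard-library sqlite3 module. The table layout and column names below are assumptions for illustration; the real cache schema differs.

```python
import datetime
import sqlite3


def insert_folders(conn, folders: list):
    # Sketch: upsert each raw dict-type folder and stamp an 'updated'
    # column with the current UTC time (assumed minimal schema).
    now = datetime.datetime.utcnow().isoformat()
    conn.executemany(
        'INSERT OR REPLACE INTO nodes (id, name, updated) VALUES (?, ?, ?)',
        [(f['id'], f['name'], now) for f in folders])
    conn.commit()


conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE nodes (id TEXT PRIMARY KEY, name TEXT, updated TEXT)')
insert_folders(conn, [{'id': 'f1', 'name': 'egg'}, {'id': 'f2', 'name': 'bacon'}])
print(conn.execute('SELECT COUNT(*) FROM nodes').fetchone()[0])  # 2
```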
Module contents¶
acdcli.plugins package¶
Submodules¶
acdcli.plugins.template module¶
This is a template that you can use for adding custom plugins.
-
class
acdcli.plugins.template.
TestPlugin
[source]¶ Bases:
acdcli.plugins.Plugin
-
MIN_VERSION
= '0.3.1'¶
-
classmethod
action
(args: argparse.Namespace) → int[source]¶ This is where the magic happens. Return a zero for success, a non-zero int for failure.
-
classmethod
attach
(subparsers: argparse.ArgumentParser, log: list, **kwargs)[source]¶ Attaches this plugin to the top-level argparse subparser group.
Parameters: - subparsers – the action subparser group
- log – a list to put initialization log messages in
-
registry
= {<class 'acdcli.plugins.template.TestPlugin'>}¶
-
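The template's shape can be reproduced in a self-contained sketch: a base class that auto-registers subclasses (mirroring the registry attribute shown above), a classmethod that attaches an argparse subparser, and an action that returns zero on success. The Plugin base below is a hypothetical stand-in for acdcli.plugins.Plugin, written for illustration only.

```python
import argparse


class Plugin:
    # Hypothetical stand-in: subclasses register themselves automatically,
    # mirroring the 'registry' attribute documented above.
    registry = set()

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        Plugin.registry.add(cls)


class TestPlugin(Plugin):
    MIN_VERSION = '0.3.1'

    @classmethod
    def attach(cls, subparsers, log: list, **kwargs):
        # wire a 'test-plugin' subcommand to this plugin's action
        p = subparsers.add_parser('test-plugin')
        p.set_defaults(func=cls.action)
        log.append('%s attached.' % cls.__name__)

    @classmethod
    def action(cls, args: argparse.Namespace) -> int:
        return 0  # zero signals success


parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers()
log = []
TestPlugin.attach(subparsers, log)
args = parser.parse_args(['test-plugin'])
print(args.func(args))  # 0
```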
acdcli.utils package¶
Submodules¶
acdcli.utils.hashing module¶
acdcli.utils.progress module¶
-
class
acdcli.utils.progress.
FileProgress
(total_sz: int, current: int=0)[source]¶ Bases:
object
-
current
¶
-
status
¶
-
total
¶
-
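A progress holder like the one documented above reduces to a total size, a current byte count, and a derived completion ratio. The sketch below follows the documented constructor signature; the status computation is an assumption, not the real class's code.

```python
class FileProgress:
    # Minimal sketch of a per-file transfer-progress holder.
    def __init__(self, total_sz: int, current: int = 0):
        self.total = total_sz
        self.current = current

    @property
    def status(self) -> float:
        # fraction completed, guarding against zero-length files
        return self.current / self.total if self.total else 1.0


p = FileProgress(200, 50)
print(p.status)  # 0.25
```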
acdcli.utils.threading module¶
-
class
acdcli.utils.threading.
QueuedLoader
(workers=1, print_progress=True, max_retries=0)[source]¶ Bases:
object
Multi-threaded loader intended for file transfer jobs.
-
MAX_NUM_WORKERS
= 8¶
-
MAX_RETRIES
= 4¶
-
REFRESH_PROGRESS_INT
= 0.3¶
-
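The loader's documented behavior (worker cap, bounded retries, queued jobs) can be sketched with concurrent.futures. Everything below is a hedged illustration under those assumptions; the add_jobs/start names and the retry loop are not taken from the real implementation.

```python
import concurrent.futures


class QueuedLoader:
    # Hedged sketch of a multi-threaded transfer-job loader with bounded
    # retries; the class constants mirror the documented values above.
    MAX_NUM_WORKERS = 8
    MAX_RETRIES = 4

    def __init__(self, workers=1, max_retries=0):
        self.workers = min(workers, self.MAX_NUM_WORKERS)
        self.max_retries = min(max_retries, self.MAX_RETRIES)
        self.jobs = []

    def add_jobs(self, jobs):
        self.jobs.extend(jobs)

    def _run(self, job):
        # retry a failing job up to max_retries additional times
        for attempt in range(self.max_retries + 1):
            try:
                return job()
            except Exception:
                if attempt == self.max_retries:
                    raise

    def start(self):
        with concurrent.futures.ThreadPoolExecutor(self.workers) as pool:
            futures = [pool.submit(self._run, j) for j in self.jobs]
            return [f.result() for f in futures]


loader = QueuedLoader(workers=2, max_retries=1)
loader.add_jobs([lambda: 'sausage', lambda: 'spam'])
print(loader.start())  # ['sausage', 'spam']
```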
acdcli.utils.time module¶
Module contents¶
Submodules¶
acdcli.acd_fuse module¶
Module contents¶
acdcli¶
TODO¶
General / API¶
- switch to multiprocessing (?)
- metalink support (?)
API¶
- support of node labels
- support for assets (?)
- favorite support (feature not yet announced officially)
- rip out the Appspot authentication handler
- fix upload of 0-byte streams
CLI¶
- unify the find action
- check symlink behavior for different Python versions (#95)
FUSE¶
- invalidate chunks of StreamedResponseCache (implement a time-out)
- respect flags when opening files
- use a filesystem test suite
File Transfer¶
- more sophisticated progress handler that supports offsets
- copy local mtime on upload (#58)
- add path exclusion by argument for download
User experience¶
- shell completion for remote directories (#127)
- even nicer help formatting
- log coloring
Tests¶
- cache methods
- more functional tests
- fuse module
Documentation¶
- write how-to on packaging plugins (sample setup.py)
Overview¶
acd_cli provides a command line interface to Amazon Cloud Drive and allows mounting your cloud drive using FUSE for read and write access. It is currently in beta stage.
Node Cache Features¶
- caching of local node metadata in an SQLite database
- addressing of remote nodes via a pathname (e.g.
/Photos/kitten.jpg
) - file search
CLI Features¶
- tree or flat listing of files and folders
- simultaneous uploads/downloads, retry on error
- basic plugin support
File Operations¶
- upload/download of single files and directories
- streamed upload/download
- folder creation
- trashing/restoring
- moving/renaming nodes
Documentation¶
The full documentation is available at https://acd-cli.readthedocs.org.
Quick Start¶
Have a look at the known issues, then follow the setup guide and authorize. You may then use the program as described in the usage guide.
CLI Usage Example¶
In this example, a two-level folder hierarchy is created in an empty cloud drive.
Then, a relative local path local/spam
is uploaded recursively using two connections.
$ acd_cli sync
Syncing...
Done.
$ acd_cli ls /
[PHwiEv53QOKoGFGqYNl8pw] [A] /
$ acd_cli mkdir /egg/
$ acd_cli mkdir /egg/bacon/
$ acd_cli upload -x 2 local/spam/ /egg/bacon/
[################################] 100.0% of 100MiB 12/12 654.4KB/s
$ acd_cli tree
/
egg/
bacon/
spam/
sausage
spam
[...]
The standard node listing format includes the node ID, the first letter of its status and its full path. Possible statuses are “AVAILABLE” and “TRASH”.
Known Issues¶
It is not possible to upload files using Python 3.2.3, 3.3.0 and 3.3.1 due to a bug in the http.client module.
API Restrictions¶
- the current upload file size limit is 50GiB
- uploads of large files >10 GiB may be successful, yet a timeout error is displayed (please check the upload by syncing manually)
- storage of node names is case-preserving, but not case-sensitive (this should not concern Apple users)
- it is not possible to share or delete files
Contribute¶
Have a look at the contributing guidelines.
Recent Changes¶
0.3.1¶
- general improvements for FUSE
- FUSE write support added
- added automatic logging
- sphinx documentation added
0.3.0¶
- FUSE read support added
0.2.2¶
- sync speed-up
- node listing format changed
- optional node listing coloring added (for Linux or via LS_COLORS)
- re-added possibility for local OAuth
0.2.1¶
- curl dependency removed
- added job queue, simultaneous transfers
- retry on error
0.2.0¶
- setuptools support
- workaround for download of files larger than 10 GiB
- automatic resuming of downloads