Commit

[DATALAD RUNCMD] run codespell throughout
=== Do not change lines below ===
{
 "chain": [],
 "cmd": "codespell -w",
 "exit": 0,
 "extra_inputs": [],
 "inputs": [],
 "outputs": [],
 "pwd": "."
}
^^^ Do not change lines above ^^^
yarikoptic committed Apr 25, 2023
1 parent b37aa20 commit fda267d
Showing 11 changed files with 18 additions and 18 deletions.
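The run record above shows the whole commit was produced mechanically by `codespell -w` (write fixes in place), which exited 0. In spirit, such a tool applies a dictionary of known misspellings to each file and rewrites the file only if something changed. A minimal illustrative sketch is below; the `CORRECTIONS` table is a tiny hypothetical subset drawn from typos fixed in this commit, not codespell's actual dictionary, and real codespell matches whole words rather than doing naive substring replacement:

```python
# Sketch of a codespell-style in-place fixer (illustrative only).
# CORRECTIONS is a small subset of the typos corrected in this commit.
CORRECTIONS = {
    "inifinite": "infinite",
    "formating": "formatting",
    "databse": "database",
    "overriden": "overridden",
}

def fix_text(text):
    """Return text with every known misspelling replaced."""
    for wrong, right in CORRECTIONS.items():
        text = text.replace(wrong, right)
    return text

def fix_file_in_place(path):
    """Rewrite a file in place, as `codespell -w` does; True if changed."""
    with open(path, "r", encoding="utf-8") as f:
        original = f.read()
    fixed = fix_text(original)
    if fixed != original:
        with open(path, "w", encoding="utf-8") as f:
            f.write(fixed)
    return fixed != original
```

Running this over a tree and committing the result under `datalad run` is what yields the machine-readable provenance block in the commit message.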
2 changes: 1 addition & 1 deletion NEWS
Original file line number Diff line number Diff line change
@@ -48,7 +48,7 @@ s3cmd-2.2.0 - 2021-09-27
* Fixed getbucketinfo that was broken when the bucket lifecycle uses the filter element (Liu Lan)
* Fixed RestoreRequest XML namespace URL (#1203) (Akete)
* Fixed PARTIAL exit code that was not properly set when needed for object_get (#1190)
- * Fixed a possible inifinite loop when a file is truncated during hashsum or upload (#1125) (Matthew Krokosz, Florent Viard)
+ * Fixed a possible infinite loop when a file is truncated during hashsum or upload (#1125) (Matthew Krokosz, Florent Viard)
* Fixed report_exception wrong error when LANG env var was not set (#1113)
* Fixed wrong wiki url in error messages (Alec Barrett)
* Py3: Fixed an AttributeError when using the "files-from" option
10 changes: 5 additions & 5 deletions ObsoleteChangeLog
@@ -192,7 +192,7 @@ No longer keeping ChangeLog up to date, use git log instead!

* S3/S3.py: Fix bucket listing for buckets with
over 1000 prefixes. (contributed by Timothee Groleau)
- * S3/S3.py: Fixed code formating.
+ * S3/S3.py: Fixed code formatting.

2010-05-21 Michal Ludvig <[email protected]>

@@ -696,7 +696,7 @@ No longer keeping ChangeLog up to date, use git log instead!
2008-11-24 Michal Ludvig <[email protected]>

* S3/Utils.py: Common XML parser.
- * s3cmd, S3/Exeptions.py: Print info message on Error.
+ * s3cmd, S3/Exceptions.py: Print info message on Error.

2008-11-21 Michal Ludvig <[email protected]>

@@ -733,7 +733,7 @@ No longer keeping ChangeLog up to date, use git log instead!
* s3cmd: Re-raise the right exception.
Merge from 0.9.8.x branch, rel 246:
* s3cmd, S3/S3.py, S3/Exceptions.py: Don't abort 'sync' or 'put' on files
- that can't be open (e.g. Permision denied). Print a warning and skip over
+ that can't be open (e.g. Permission denied). Print a warning and skip over
instead.
Merge from 0.9.8.x branch, rel 245:
* S3/S3.py: Escape parameters in strings. Fixes sync to and
@@ -754,7 +754,7 @@ No longer keeping ChangeLog up to date, use git log instead!
2008-09-15 Michal Ludvig <[email protected]>

* s3cmd, S3/S3.py, S3/Utils.py, S3/S3Uri.py, S3/Exceptions.py:
- Yet anoter Unicode round. Unicodised all command line arguments
+ Yet another Unicode round. Unicodised all command line arguments
before processing.

2008-09-15 Michal Ludvig <[email protected]>
@@ -1242,7 +1242,7 @@ No longer keeping ChangeLog up to date, use git log instead!

2007-01-26 Michal Ludvig <[email protected]>

- * S3/S3fs.py: Added support for stroing/loading inodes.
+ * S3/S3fs.py: Added support for storing/loading inodes.
No data yet however.

2007-01-26 Michal Ludvig <[email protected]>
2 changes: 1 addition & 1 deletion S3/BaseUtils.py
@@ -96,7 +96,7 @@ def dateS3toUnix(date):

def dateRFC822toPython(date):
"""
-    Convert a string formated like '2020-06-27T15:56:34Z' into a python datetime
+    Convert a string formatted like '2020-06-27T15:56:34Z' into a python datetime
"""
return dateutil.parser.parse(date, fuzzy=True)
__all__.append("dateRFC822toPython")
2 changes: 1 addition & 1 deletion S3/Config.py
@@ -206,7 +206,7 @@ class Config(object):
reduced_redundancy = False
storage_class = u""
follow_symlinks = False
-    # If too big, this value can be overriden by the OS socket timeouts max values.
+    # If too big, this value can be overridden by the OS socket timeouts max values.
# For example, on Linux, a connection attempt will automatically timeout after 120s.
socket_timeout = 300
invalidate_on_cf = False
2 changes: 1 addition & 1 deletion S3/ConnMan.py
@@ -160,7 +160,7 @@ def match_hostname(self):
def _https_connection(hostname, port=None):
try:
context = http_connection._ssl_context()
-        # Wilcard certificates do not work with DNS-style named buckets.
+        # Wildcard certificates do not work with DNS-style named buckets.
bucket_name, success = getBucketFromHostname(hostname)
if success and '.' in bucket_name:
# this merely delays running the hostname check until
2 changes: 1 addition & 1 deletion S3/ExitCodes.py
@@ -5,7 +5,7 @@
EX_OK = 0
EX_GENERAL = 1
EX_PARTIAL = 2 # some parts of the command succeeded, while others failed
- EX_SERVERMOVED = 10 # 301: Moved permanantly & 307: Moved temp
+ EX_SERVERMOVED = 10 # 301: Moved permanently & 307: Moved temp
EX_SERVERERROR = 11 # 400, 405, 411, 416, 417, 501: Bad request, 504: Gateway Time-out
EX_NOTFOUND = 12 # 404: Not found
EX_CONFLICT = 13 # 409: Conflict (ex: bucket error)
2 changes: 1 addition & 1 deletion S3/FileLists.py
@@ -410,7 +410,7 @@ def _get_remote_attribs(uri, remote_item):
try:
md5 = response['s3cmd-attrs']['md5']
remote_item.update({'md5': md5})
-        debug(u"retreived md5=%s from headers" % md5)
+        debug(u"retrieved md5=%s from headers" % md5)
except KeyError:
pass

2 changes: 1 addition & 1 deletion S3/S3.py
@@ -227,7 +227,7 @@ class S3(object):
)

operations = BidirMap(
-        UNDFINED = 0x0000,
+        UNDEFINED = 0x0000,
LIST_ALL_BUCKETS = targets["SERVICE"] | http_methods["GET"],
BUCKET_CREATE = targets["BUCKET"] | http_methods["PUT"],
BUCKET_LIST = targets["BUCKET"] | http_methods["GET"],
4 changes: 2 additions & 2 deletions S3/Utils.py
@@ -299,7 +299,7 @@ def getHostnameFromBucket(bucket):
try:
import pwd
def getpwuid_username(uid):
-        """returns a username from the password databse for the given uid"""
+        """returns a username from the password database for the given uid"""
return unicodise_s(pwd.getpwuid(uid).pw_name)
except ImportError:
import getpass
@@ -310,7 +310,7 @@ def getpwuid_username(uid):
try:
import grp
def getgrgid_grpname(gid):
-        """returns a groupname from the group databse for the given gid"""
+        """returns a groupname from the group database for the given gid"""
return unicodise_s(grp.getgrgid(gid).gr_name)
except ImportError:
def getgrgid_grpname(gid):
4 changes: 2 additions & 2 deletions run-tests.py
@@ -110,7 +110,7 @@ def is_exe(fpath):

os.system("tar -xf testsuite/checksum.tar -C testsuite")
if not os.path.isfile('testsuite/checksum/cksum33.txt'):
-    print("Something went wrong while unpacking testsuite/checkum.tar")
+    print("Something went wrong while unpacking testsuite/checksum.tar")
sys.exit(1)

## Fix up permissions for permission-denied tests
@@ -874,7 +874,7 @@ def pbucket(tail):
must_find = [ "Payer: BucketOwner"],
skip_if_profile=['minio'])

- ## ====== Recursive delete maximum exceeed
+ ## ====== Recursive delete maximum exceed
test_s3cmd("Recursive delete maximum exceeded", ['del', '--recursive', '--max-delete=1', '--exclude', 'Atomic*', '%s/xyz/etc' % pbucket(1)],
must_not_find = [ "delete: '%s/xyz/etc/TypeRa.ttf'" % pbucket(1) ])

4 changes: 2 additions & 2 deletions s3cmd
@@ -2037,7 +2037,7 @@ def cmd_sync_local2remote(args):
if ret == EX_OK:
ret = status
# uploaded_objects_list reference is passed so it can be filled with
-        # destination object of succcessful copies so that they can be
+        # destination object of successful copies so that they can be
# invalidated by cf
n_copies, saved_bytes, failed_copy_files = remote_copy(
s3, copy_pairs, destination_base, uploaded_objects_list, True)
@@ -2881,7 +2881,7 @@ def update_acl(s3, uri, seq_label=""):
else:
acl.grantAnonRead()
something_changed = True
-    elif cfg.acl_public == False: # we explicitely check for False, because it could be None
+    elif cfg.acl_public == False: # we explicitly check for False, because it could be None
if not acl.isAnonRead() and not acl.isAnonWrite():
info(u"%s: already Private, skipping %s" % (uri, seq_label))
else:
