
Commit

Merge commit '2b19c289a9488bfd813541e44745be52df413439' into catchup/long_lived_vault_from_main_2b19c289a9488bfd813541e44745be52df413439
Quexington committed Nov 4, 2024
2 parents 4e91ae7 + 2b19c28 commit e327b08
Showing 172 changed files with 1,053 additions and 977 deletions.
10 changes: 0 additions & 10 deletions .flake8

This file was deleted.

2 changes: 1 addition & 1 deletion .github/workflows/dependency-review.yml
@@ -21,5 +21,5 @@ jobs:
- name: "Dependency Review"
uses: actions/dependency-review-action@v4
with:
allow-dependencies-licenses: pkg:pypi/pyinstaller
allow-dependencies-licenses: pkg:pypi/pylint, pkg:pypi/pyinstaller
deny-licenses: AGPL-1.0-only, AGPL-1.0-or-later, AGPL-1.0-or-later, AGPL-3.0-or-later, GPL-1.0-only, GPL-1.0-or-later, GPL-2.0-only, GPL-2.0-or-later, GPL-3.0-only, GPL-3.0-or-later
45 changes: 32 additions & 13 deletions .github/workflows/test-install-scripts.yml
@@ -22,34 +22,53 @@ concurrency:

jobs:
test_scripts:
name: Test Install Scripts - ${{ matrix.development.name }} - ${{ matrix.editable.name }}
runs-on: ${{ matrix.os.runs-on }}
name: Native ${{ matrix.os.emoji }} ${{ matrix.arch.name }} ${{ matrix.development.name }} - ${{ matrix.editable.name }}
runs-on: ${{ matrix.os.runs-on[matrix.arch.matrix] }}
strategy:
fail-fast: false
matrix:
python:
- major-dot-minor: "3.10"
os:
- runs-on: macos-latest
matrix: macos-arm
- runs-on: macos-13
matrix: macos-intel
- runs-on: ubuntu-latest
- name: Linux
emoji: 🐧
runs-on:
arm: [Linux, ARM64]
intel: ubuntu-latest
matrix: linux
- runs-on: windows-latest
- name: macOS
emoji: 🍎
runs-on:
arm: macos-latest
intel: macos-13
matrix: macos
- name: Windows
emoji: 🪟
runs-on:
intel: windows-latest
matrix: windows
arch:
- name: ARM
matrix: arm
- name: Intel
matrix: intel
development:
- name: Non-development
- name: Non-dev
value: false
- name: Development
- name: Dev
value: true
editable:
- name: Non-editable
- name: Non-edit
value: false
matrix: non-editable
- name: Editable
- name: Edit
value: true
matrix: editable
exclude:
- os:
matrix: windows
arch:
matrix: arm

steps:
- name: Checkout Code
@@ -112,7 +131,7 @@ jobs:
[ "$POST_VERSION" != "shooby-doowah" -a "$PRE_VERSION" = "$POST_VERSION" ]
test_scripts_in_docker:
name: Test Install Scripts ${{ matrix.distribution.name }} ${{ matrix.arch.name }}
name: Docker ${{ matrix.distribution.name }} ${{ matrix.arch.name }}
runs-on: ${{ matrix.os.runs-on[matrix.arch.matrix] }}
container: ${{ matrix.distribution.url }}
strategy:
2 changes: 0 additions & 2 deletions .github/workflows/upload-pypi-source.yml
@@ -120,8 +120,6 @@ jobs:
check:
- name: black
command: black --check --diff .
- name: flake8
command: flake8 benchmarks build_scripts chia tools *.py
- name: generated protocol tests
command: |
python3 -m chia._tests.util.build_network_protocol_files
21 changes: 0 additions & 21 deletions .pre-commit-config.yaml
@@ -13,13 +13,6 @@ repos:
entry: ./activated.py python chia/_tests/build-init-files.py -v --root .
language: system
pass_filenames: false
- repo: local
hooks:
- id: pyupgrade
name: pyupgrade
entry: ./activated.py pyupgrade --py39-plus --keep-runtime-typing
language: system
types: [python]
- repo: local
hooks:
- id: black
@@ -92,20 +85,6 @@ repos:
entry: ./activated.py mypy
language: system
pass_filenames: false
- repo: local
hooks:
- id: isort
name: isort
entry: ./activated.py isort
language: system
types: [python]
- repo: local
hooks:
- id: flake8
name: Flake8
entry: ./activated.py flake8
language: system
types: [python]
- repo: local
hooks:
- id: ruff
13 changes: 5 additions & 8 deletions CONTRIBUTING.md
@@ -57,15 +57,13 @@ to configure how the tests are run. For example, for more logging: change the lo
```bash
sh install.sh -d
. ./activate
black . && isort benchmarks build_scripts chia tests tools *.py && mypy && flake8 benchmarks build_scripts chia tests tools *.py && pylint benchmarks build_scripts chia tests tools *.py
black . && ruff check --fix && mypy && pylint benchmarks build_scripts chia tests tools *.py
py.test tests -v --durations 0
```

The [black library](https://black.readthedocs.io/en/stable/) is used as an automatic style formatter to make things easier.
The [flake8 library](https://readthedocs.org/projects/flake8/) helps ensure consistent style.
The [Mypy library](https://mypy.readthedocs.io/en/stable/) is very useful for ensuring objects are of the correct type, so try to always add the type of the return value, and the type of local variables.
The [isort library](https://isort.readthedocs.io) is used to sort, group and validate imports in all python files.
The [Ruff library](https://docs.astral.sh) is used to further lint all of the python files
The [Ruff library](https://docs.astral.sh) is used to sort, group, validate imports, ensure consistent style, and further lint all of the python files

If you want verbose logging for tests, edit the `tests/pytest.ini` file.

@@ -84,10 +82,9 @@ provided configuration with `pre-commit install`.
1. Install python extension
2. Set the environment to `./venv/bin/python`
3. Install mypy plugin
4. Preferences > Settings > Python > Linting > flake8 enabled
5. Preferences > Settings > Python > Linting > mypy enabled
6. Preferences > Settings > Formatting > Python > Provider > black
7. Preferences > Settings > mypy > Targets: set to `./chia`
4. Preferences > Settings > Python > Linting > mypy enabled
5. Preferences > Settings > Formatting > Python > Provider > black
6. Preferences > Settings > mypy > Targets: set to `./chia`

## Configure Pycharm

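The CONTRIBUTING.md text above asks contributors to annotate return values and local variables for mypy and hands import sorting and style checks to Ruff. A minimal sketch of what those conventions look like in practice (illustrative only, not code from this diff):

```python
from __future__ import annotations

# Standard library imports first, then local imports -- the grouping the
# Ruff import rules (replacing the removed isort hook) are assumed to enforce.
import math
from dataclasses import dataclass


@dataclass
class Sample:
    values: list[float]

    def mean(self) -> float:  # annotated return type, checked by mypy
        total: float = math.fsum(self.values)  # annotated local variable
        return total / len(self.values)
```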
6 changes: 5 additions & 1 deletion Install.ps1
@@ -102,7 +102,11 @@ foreach ($extra in $extras)
$extras_cli += $extra
}

./Setup-poetry.ps1 -pythonVersion "$pythonVersion"
if (-not (Get-Item -ErrorAction SilentlyContinue ".penv/Scripts/poetry.exe").Exists)
{
./Setup-poetry.ps1 -pythonVersion "$pythonVersion"
}

.penv/Scripts/poetry env use $(py -"$pythonVersion" -c 'import sys; print(sys.executable)')
.penv/Scripts/poetry install @extras_cli

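The Install.ps1 change above wraps Setup-poetry.ps1 in an existence check so a cached `.penv` is reused instead of bootstrapping poetry on every run. The same guard expressed in Python, purely as an illustration of the idempotence pattern (paths and version are assumptions, not taken from the repo):

```python
import subprocess
from pathlib import Path

poetry_exe = Path(".penv/Scripts/poetry.exe")
if not poetry_exe.exists():
    # Only bootstrap poetry when it is not already present in the cached venv.
    subprocess.run(
        ["powershell", "-File", "./Setup-poetry.ps1", "-pythonVersion", "3.10"],
        check=True,
    )
```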
2 changes: 1 addition & 1 deletion benchmarks/block_ref.py
@@ -86,7 +86,7 @@ async def main(db_path: Path) -> None:
timing += one_call
assert gen is not None

print(f"get_block_generator(): {timing/REPETITIONS:0.3f}s")
print(f"get_block_generator(): {timing / REPETITIONS:0.3f}s")

blockchain.shut_down()

4 changes: 2 additions & 2 deletions benchmarks/block_store.py
@@ -447,7 +447,7 @@ async def run_add_block_benchmark(version: int) -> None:
print("profiling get_block_records_close_to_peak")

start = monotonic()
block_dict, peak_h = await block_store.get_block_records_close_to_peak(99)
block_dict, _peak_h = await block_store.get_block_records_close_to_peak(99)
assert len(block_dict) == 100

stop = monotonic()
@@ -490,7 +490,7 @@ async def run_add_block_benchmark(version: int) -> None:
print(f"all tests completed in {all_test_time:0.4f}s")

db_size = os.path.getsize(Path("block-store-benchmark.db"))
print(f"database size: {db_size/1000000:.3f} MB")
print(f"database size: {db_size / 1000000:.3f} MB")


if __name__ == "__main__":
4 changes: 2 additions & 2 deletions benchmarks/coin_store.py
@@ -293,14 +293,14 @@ async def run_new_block_benchmark(version: int) -> None:
if verbose:
print("")
print(
f"{total_time:0.4f}s, GET COINS REMOVED AT HEIGHT {block_height-1} blocks, "
f"{total_time:0.4f}s, GET COINS REMOVED AT HEIGHT {block_height - 1} blocks, "
f"found {found_coins} coins in total"
)
all_test_time += total_time
print(f"all tests completed in {all_test_time:0.4f}s")

db_size = os.path.getsize(Path("coin-store-benchmark.db"))
print(f"database size: {db_size/1000000:.3f} MB")
print(f"database size: {db_size / 1000000:.3f} MB")


if __name__ == "__main__":
2 changes: 1 addition & 1 deletion build_scripts/check_dependency_artifacts.py
@@ -24,7 +24,7 @@ def excepted(path: pathlib.Path) -> bool:
# TODO: This should be implemented with a real file name parser though i'm
# uncertain at the moment what package that would be.

name, dash, rest = path.name.partition("-")
name, _dash, _rest = path.name.partition("-")
return name in excepted_packages


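Many of the small renames in this diff (`_dash` and `_rest` here, `_peak_h` above in block_store.py, and `_result`, `_cost`, `_output`, `_err` in the test files below) apply one convention: a value that is unpacked but never read gets a leading underscore, which linters such as Ruff treat as intentionally unused. A tiny sketch of the pattern with hypothetical names:

```python
def package_name(filename: str) -> str:
    # Only the text before the first dash is needed; the separator and the
    # remainder are deliberately ignored, hence the underscore prefixes.
    name, _sep, _rest = filename.partition("-")
    return name


print(package_name("pyinstaller-6.3.0-py3-none-any.whl"))  # -> pyinstaller
```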
10 changes: 5 additions & 5 deletions chia/_tests/blockchain/test_blockchain.py
@@ -77,7 +77,7 @@ async def make_empty_blockchain(constants: ConsensusConstants) -> AsyncIterator[
Provides a list of 10 valid blocks, as well as a blockchain with 9 blocks added to it.
"""

async with create_blockchain(constants, 2) as (bc, db_wrapper):
async with create_blockchain(constants, 2) as (bc, _):
yield bc


@@ -606,7 +606,7 @@ async def do_test_invalid_icc_sub_slot_vdf(
),
keychain=keychain,
)
async with create_blockchain(bt_high_iters.constants, db_version) as (bc1, db_wrapper):
async with create_blockchain(bt_high_iters.constants, db_version) as (bc1, _):
blocks = bt_high_iters.get_consecutive_blocks(10)
for block in blocks:
if (
@@ -1850,8 +1850,8 @@ async def test_pre_validation(
)
end = time.time()
log.info(f"Total time: {end - start} seconds")
log.info(f"Average pv: {sum(times_pv)/(len(blocks)/n_at_a_time)}")
log.info(f"Average rb: {sum(times_rb)/(len(blocks))}")
log.info(f"Average pv: {sum(times_pv) / (len(blocks) / n_at_a_time)}")
log.info(f"Average rb: {sum(times_rb) / (len(blocks))}")


class TestBodyValidation:
@@ -2775,7 +2775,7 @@ async def test_invalid_cost_in_block(
block_generator, max_cost, mempool_mode=False, height=softfork_height, constants=bt.constants
)
fork_info = ForkInfo(block_2.height - 1, block_2.height - 1, block_2.prev_header_hash)
result, err, _ = await b.add_block(
_result, err, _ = await b.add_block(
block_2,
PreValidationResult(None, uint64(1), npc_result.conds, False, uint32(0)),
None,
8 changes: 4 additions & 4 deletions chia/_tests/clvm/test_chialisp_deserialization.py
@@ -86,16 +86,16 @@ def test_deserialization_large_numbers():
def test_overflow_atoms():
b = hexstr_to_bytes(serialized_atom_overflow(0xFFFFFFFF))
with pytest.raises(Exception):
cost, output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])
_cost, _output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])

b = hexstr_to_bytes(serialized_atom_overflow(0x3FFFFFFFF))
with pytest.raises(Exception):
cost, output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])
_cost, _output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])

b = hexstr_to_bytes(serialized_atom_overflow(0xFFFFFFFFFF))
with pytest.raises(Exception):
cost, output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])
_cost, _output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])

b = hexstr_to_bytes(serialized_atom_overflow(0x1FFFFFFFFFF))
with pytest.raises(Exception):
cost, output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])
_cost, _output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])
4 changes: 2 additions & 2 deletions chia/_tests/clvm/test_puzzles.py
@@ -28,7 +28,7 @@
T1 = CoinTimestamp(1, uint32(10000000))
T2 = CoinTimestamp(5, uint32(10003000))

MAX_BLOCK_COST_CLVM = int(1e18)
MAX_BLOCK_COST_CLVM = 10**18


def secret_exponent_for_index(index: int) -> int:
@@ -206,7 +206,7 @@ def test_p2_delegated_puzzle_or_hidden_puzzle_with_hidden_puzzle():

def do_test_spend_p2_delegated_puzzle_or_hidden_puzzle_with_delegated_puzzle(hidden_pub_key_index):
key_lookup = KeyTool()
payments, conditions = default_payments_and_conditions(1, key_lookup)
_payments, conditions = default_payments_and_conditions(1, key_lookup)

hidden_puzzle = p2_conditions.puzzle_for_conditions(conditions)
hidden_public_key = public_key_for_index(hidden_pub_key_index, key_lookup)
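The `int(1e18)` → `10**18` change in test_puzzles.py keeps the constant as a pure integer instead of routing it through a float. For 10^18 the two forms are still equal, but the float detour silently loses exactness for larger powers of ten, which is presumably the motivation (an assumption; the diff itself gives no rationale):

```python
# 1e18 is still exactly representable as a 64-bit float, so both forms agree:
assert int(1e18) == 10**18

# Beyond the 53-bit mantissa the float detour quietly drifts:
assert int(1e23) != 10**23
print(int(1e23))   # 99999999999999991611392
print(10**23)      # 100000000000000000000000
```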
4 changes: 2 additions & 2 deletions chia/_tests/clvm/test_singletons.py
@@ -69,7 +69,7 @@ async def make_and_spend_bundle(
spend_bundle = cost_logger.add_cost(cost_log_msg, spend_bundle)

try:
result, error = await sim_client.push_tx(spend_bundle)
_result, error = await sim_client.push_tx(spend_bundle)
if error is None:
await sim.farm_block()
elif ex_error is not None:
@@ -334,7 +334,7 @@ async def test_singleton_top_layer(version, cost_logger):
DELAY_TIME,
DELAY_PH,
)
result, error = await sim_client.push_tx(SpendBundle([to_delay_ph_coinsol], G2Element()))
_result, error = await sim_client.push_tx(SpendBundle([to_delay_ph_coinsol], G2Element()))
assert error == Err.ASSERT_SECONDS_RELATIVE_FAILED

# SPEND TO DELAYED PUZZLE HASH
2 changes: 1 addition & 1 deletion chia/_tests/clvm/test_spend_sim.py
@@ -150,7 +150,7 @@ async def test_all_endpoints():
],
G2Element(),
)
result, error = await sim_client.push_tx(bundle)
_result, error = await sim_client.push_tx(bundle)
assert not error
# get_all_mempool_tx_ids
mempool_items = await sim_client.get_all_mempool_tx_ids()
8 changes: 4 additions & 4 deletions chia/_tests/cmds/test_click_types.py
@@ -67,7 +67,7 @@ def test_click_tx_fee_type() -> None:
TransactionFeeParamType().convert(overflow_decimal_str, None, None)
# Test Type Failures
with pytest.raises(BadParameter):
TransactionFeeParamType().convert(float(0.01), None, None)
TransactionFeeParamType().convert(0.01, None, None)


def test_click_amount_type() -> None:
@@ -139,7 +139,7 @@ def test_click_address_type() -> None:
AddressParamType().convert(burn_bad_prefix, None, None)
# Test Type Failures
with pytest.raises(BadParameter):
AddressParamType().convert(float(0.01), None, None)
AddressParamType().convert(0.01, None, None)

# check class error handling
with pytest.raises(ValueError):
@@ -170,7 +170,7 @@ def test_click_bytes32_type() -> None:
Bytes32ParamType().convert("test", None, None)
# Test Type Failures
with pytest.raises(BadParameter):
Bytes32ParamType().convert(float(0.01), None, None)
Bytes32ParamType().convert(0.01, None, None)


def test_click_uint64_type() -> None:
@@ -192,4 +192,4 @@ def test_click_uint64_type() -> None:
Uint64ParamType().convert(str(overflow_ammt), None, None)
# Test Type Failures
with pytest.raises(BadParameter):
Uint64ParamType().convert(float(0.01), None, None)
Uint64ParamType().convert(0.01, None, None)
4 changes: 2 additions & 2 deletions chia/_tests/cmds/test_cmds_util.py
@@ -44,7 +44,7 @@ async def test_failure_output_no_traceback(
) as (client, _):
await client.fetch(path="/table", request_json={"response": expected_response})

out, err = capsys.readouterr()
out, _err = capsys.readouterr()

assert "ResponseFailureError" not in out
assert "Traceback:" not in out
@@ -69,7 +69,7 @@ async def test_failure_output_with_traceback(
) as (client, _):
await client.fetch(path="/table", request_json={"response": expected_response})

out, err = capsys.readouterr()
out, _err = capsys.readouterr()
assert sample_traceback_json not in out
assert sample_traceback in out
