Commit

Merge branch 'main' into less_getattr_in_wschiaconnection
altendky committed Nov 6, 2024
2 parents 71bde5a + 15079c6 commit 7d6d287
Showing 189 changed files with 1,163 additions and 1,200 deletions.
10 changes: 0 additions & 10 deletions .flake8

This file was deleted.

2 changes: 1 addition & 1 deletion .github/workflows/dependency-review.yml
@@ -21,5 +21,5 @@ jobs:
- name: "Dependency Review"
uses: actions/dependency-review-action@v4
with:
allow-dependencies-licenses: pkg:pypi/pyinstaller
allow-dependencies-licenses: pkg:pypi/pylint, pkg:pypi/pyinstaller
deny-licenses: AGPL-1.0-only, AGPL-1.0-or-later, AGPL-1.0-or-later, AGPL-3.0-or-later, GPL-1.0-only, GPL-1.0-or-later, GPL-2.0-only, GPL-2.0-or-later, GPL-3.0-only, GPL-3.0-or-later
6 changes: 2 additions & 4 deletions .github/workflows/upload-pypi-source.yml
@@ -118,10 +118,8 @@ jobs:
python:
- major_dot_minor: "3.10"
check:
- name: black
command: black --check --diff .
- name: flake8
command: flake8 benchmarks build_scripts chia tools *.py
- name: ruff
command: ruff format --check --diff .
- name: generated protocol tests
command: |
python3 -m chia._tests.util.build_network_protocol_files
5 changes: 0 additions & 5 deletions .isort.cfg

This file was deleted.

34 changes: 10 additions & 24 deletions .pre-commit-config.yaml
@@ -15,16 +15,9 @@ repos:
pass_filenames: false
- repo: local
hooks:
- id: pyupgrade
name: pyupgrade
entry: ./activated.py pyupgrade --py39-plus --keep-runtime-typing
language: system
types: [python]
- repo: local
hooks:
- id: black
name: black
entry: ./activated.py black
- id: ruff_format
name: ruff format
entry: ./activated.py ruff format
language: system
require_serial: true
types_or: [python, pyi]
@@ -78,6 +71,13 @@ repos:
entry: ./activated.py python chia/util/virtual_project_analysis.py print_cycles --directory chia --config virtual_project.yaml
language: system
pass_filenames: false
- repo: local
hooks:
- id: ruff
name: Ruff
entry: ./activated.py ruff check --fix
language: system
types: [python]
- repo: local
hooks:
- id: build mypy.ini
@@ -92,17 +92,3 @@
entry: ./activated.py mypy
language: system
pass_filenames: false
- repo: local
hooks:
- id: flake8
name: Flake8
entry: ./activated.py flake8
language: system
types: [python]
- repo: local
hooks:
- id: ruff
name: Ruff
entry: ./activated.py ruff check --fix
language: system
types: [python]
30 changes: 3 additions & 27 deletions CONTRIBUTING.md
@@ -57,14 +57,12 @@ to configure how the tests are run. For example, for more logging: change the lo
```bash
sh install.sh -d
. ./activate
black . && ruff check --fix && mypy && flake8 benchmarks build_scripts chia tests tools *.py && pylint benchmarks build_scripts chia tests tools *.py
py.test tests -v --durations 0
ruff format && ruff check --fix && mypy
pytest tests -v --durations 0
```

The [black library](https://black.readthedocs.io/en/stable/) is used as an automatic style formatter to make things easier.
The [flake8 library](https://readthedocs.org/projects/flake8/) helps ensure consistent style.
The [Mypy library](https://mypy.readthedocs.io/en/stable/) is very useful for ensuring objects are of the correct type, so try to always add the type of the return value, and the type of local variables.
The [Ruff library](https://docs.astral.sh) is used to sort, group, validate imports, and further lint all of the python files
The [Ruff library](https://docs.astral.sh) is used to format, sort, group, validate imports, ensure consistent style, and further lint all of the python files

If you want verbose logging for tests, edit the `tests/pytest.ini` file.

@@ -78,28 +76,6 @@ To install pre-commit on your system see https://pre-commit.com/#installation. A
with `pre-commit run` or let it trigger the hooks automatically before each commit by installing the
provided configuration with `pre-commit install`.

## Configure VS code

1. Install python extension
2. Set the environment to `./venv/bin/python`
3. Install mypy plugin
4. Preferences > Settings > Python > Linting > flake8 enabled
5. Preferences > Settings > Python > Linting > mypy enabled
6. Preferences > Settings > Formatting > Python > Provider > black
7. Preferences > Settings > mypy > Targets: set to `./chia`

## Configure Pycharm

Pycharm is an amazing and beautiful python IDE that some of us use to work on this project.
If you combine it with python black and formatting on save, you will get a very efficient
workflow. It's also especially efficient for git branching, cherry-picking, committing and pushing.

1. Run blackd in a terminal
2. Install BlackConnect plugin
3. Set to run python black on save
4. Set line length to 120
5. Install the linters in the root directory

## Testnets and review environments

The current official testnet is testnet10. Look at `chia/util/initial_config.yaml` to see the configuration parameters
6 changes: 5 additions & 1 deletion Install.ps1
@@ -102,7 +102,11 @@ foreach ($extra in $extras)
$extras_cli += $extra
}

./Setup-poetry.ps1 -pythonVersion "$pythonVersion"
if (-not (Get-Item -ErrorAction SilentlyContinue ".penv/Scripts/poetry.exe").Exists)
{
./Setup-poetry.ps1 -pythonVersion "$pythonVersion"
}

.penv/Scripts/poetry env use $(py -"$pythonVersion" -c 'import sys; print(sys.executable)')
.penv/Scripts/poetry install @extras_cli

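The new `if` guard makes `Install.ps1` idempotent: the poetry bootstrap only runs when `.penv/Scripts/poetry.exe` is missing, so re-running the installer skips the slow setup step. A minimal Python sketch of the same check (the `ensure_poetry` and `bootstrap` names are hypothetical, not part of the repository):

```python
from pathlib import Path
from typing import Callable


def ensure_poetry(root: Path, bootstrap: Callable[[], None]) -> bool:
    """Run `bootstrap` only if the poetry executable is not already present.

    Returns True when the bootstrap actually ran.
    """
    exe = root / ".penv" / "Scripts" / "poetry.exe"
    if not exe.exists():
        bootstrap()
        return True
    return False
```

Because the existence check happens before the expensive call, repeated invocations after the first successful bootstrap become near no-ops.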
2 changes: 1 addition & 1 deletion activated.sh
@@ -1,4 +1,4 @@
#!/bin/sh
#!/usr/bin/env bash

set -o errexit

2 changes: 1 addition & 1 deletion benchmarks/block_ref.py
@@ -86,7 +86,7 @@ async def main(db_path: Path) -> None:
timing += one_call
assert gen is not None

print(f"get_block_generator(): {timing/REPETITIONS:0.3f}s")
print(f"get_block_generator(): {timing / REPETITIONS:0.3f}s")

blockchain.shut_down()

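The whitespace-only f-string changes in this commit (e.g. `timing/REPETITIONS` becoming `timing / REPETITIONS`) accompany the switch to ruff's formatter, which normalizes spacing around operators inside f-string replacement fields; the rendered output is unchanged. A small self-contained illustration (the values are made up):

```python
REPETITIONS = 4
timing = 1.0

# Before formatting this read f"... {timing/REPETITIONS:0.3f}s" --
# only the source spacing differs, the result is identical.
msg = f"get_block_generator(): {timing / REPETITIONS:0.3f}s"
print(msg)
```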
4 changes: 2 additions & 2 deletions benchmarks/block_store.py
@@ -447,7 +447,7 @@ async def run_add_block_benchmark(version: int) -> None:
print("profiling get_block_records_close_to_peak")

start = monotonic()
block_dict, peak_h = await block_store.get_block_records_close_to_peak(99)
block_dict, _peak_h = await block_store.get_block_records_close_to_peak(99)
assert len(block_dict) == 100

stop = monotonic()
@@ -490,7 +490,7 @@ async def run_add_block_benchmark(version: int) -> None:
print(f"all tests completed in {all_test_time:0.4f}s")

db_size = os.path.getsize(Path("block-store-benchmark.db"))
print(f"database size: {db_size/1000000:.3f} MB")
print(f"database size: {db_size / 1000000:.3f} MB")


if __name__ == "__main__":
4 changes: 2 additions & 2 deletions benchmarks/coin_store.py
@@ -293,14 +293,14 @@ async def run_new_block_benchmark(version: int) -> None:
if verbose:
print("")
print(
f"{total_time:0.4f}s, GET COINS REMOVED AT HEIGHT {block_height-1} blocks, "
f"{total_time:0.4f}s, GET COINS REMOVED AT HEIGHT {block_height - 1} blocks, "
f"found {found_coins} coins in total"
)
all_test_time += total_time
print(f"all tests completed in {all_test_time:0.4f}s")

db_size = os.path.getsize(Path("coin-store-benchmark.db"))
print(f"database size: {db_size/1000000:.3f} MB")
print(f"database size: {db_size / 1000000:.3f} MB")


if __name__ == "__main__":
2 changes: 1 addition & 1 deletion build_scripts/build_license_directory.sh
@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash

# PULL IN LICENSES USING NPM - LICENSE CHECKER
npm install -g license-checker
2 changes: 1 addition & 1 deletion build_scripts/build_linux_deb-1-gui.sh
@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash

set -o errexit

2 changes: 1 addition & 1 deletion build_scripts/build_linux_deb-2-installer.sh
@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash

set -o errexit

2 changes: 1 addition & 1 deletion build_scripts/build_linux_rpm-1-gui.sh
@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash

set -o errexit

2 changes: 1 addition & 1 deletion build_scripts/build_linux_rpm-2-installer.sh
@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash

set -o errexit

2 changes: 1 addition & 1 deletion build_scripts/build_macos-1-gui.sh
@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash

set -o errexit -o nounset

2 changes: 1 addition & 1 deletion build_scripts/build_macos-2-installer.sh
@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash

set -o errexit -o nounset

2 changes: 1 addition & 1 deletion build_scripts/build_win_license_dir.sh
@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash

# PULL IN LICENSES USING NPM - LICENSE CHECKER
npm install -g license-checker
2 changes: 1 addition & 1 deletion build_scripts/check_dependency_artifacts.py
@@ -24,7 +24,7 @@ def excepted(path: pathlib.Path) -> bool:
# TODO: This should be implemented with a real file name parser though i'm
# uncertain at the moment what package that would be.

name, dash, rest = path.name.partition("-")
name, _dash, _rest = path.name.partition("-")
return name in excepted_packages


2 changes: 1 addition & 1 deletion build_scripts/clean-runner.sh
@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# Cleans up files/directories that may be left over from previous runs for a clean slate before starting a new build

set -o errexit
2 changes: 0 additions & 2 deletions chia/_tests/blockchain/test_augmented_chain.py
@@ -15,7 +15,6 @@

@dataclass
class NullBlockchain:

if TYPE_CHECKING:
from chia.consensus.blockchain_interface import BlocksProtocol

@@ -72,7 +71,6 @@ def BR(b: FullBlock) -> BlockRecord:
@pytest.mark.anyio
@pytest.mark.limit_consensus_modes(reason="save time")
async def test_augmented_chain(default_10000_blocks: list[FullBlock]) -> None:

blocks = default_10000_blocks
# this test blockchain is expected to have block generators at these
# heights:
10 changes: 5 additions & 5 deletions chia/_tests/blockchain/test_blockchain.py
@@ -77,7 +77,7 @@ async def make_empty_blockchain(constants: ConsensusConstants) -> AsyncIterator[
Provides a list of 10 valid blocks, as well as a blockchain with 9 blocks added to it.
"""

async with create_blockchain(constants, 2) as (bc, db_wrapper):
async with create_blockchain(constants, 2) as (bc, _):
yield bc


@@ -606,7 +606,7 @@ async def do_test_invalid_icc_sub_slot_vdf(
),
keychain=keychain,
)
async with create_blockchain(bt_high_iters.constants, db_version) as (bc1, db_wrapper):
async with create_blockchain(bt_high_iters.constants, db_version) as (bc1, _):
blocks = bt_high_iters.get_consecutive_blocks(10)
for block in blocks:
if (
@@ -1850,8 +1850,8 @@ async def test_pre_validation(
)
end = time.time()
log.info(f"Total time: {end - start} seconds")
log.info(f"Average pv: {sum(times_pv)/(len(blocks)/n_at_a_time)}")
log.info(f"Average rb: {sum(times_rb)/(len(blocks))}")
log.info(f"Average pv: {sum(times_pv) / (len(blocks) / n_at_a_time)}")
log.info(f"Average rb: {sum(times_rb) / (len(blocks))}")


class TestBodyValidation:
@@ -2775,7 +2775,7 @@ async def test_invalid_cost_in_block(
block_generator, max_cost, mempool_mode=False, height=softfork_height, constants=bt.constants
)
fork_info = ForkInfo(block_2.height - 1, block_2.height - 1, block_2.prev_header_hash)
result, err, _ = await b.add_block(
_result, err, _ = await b.add_block(
block_2,
PreValidationResult(None, uint64(1), npc_result.conds, False, uint32(0)),
None,
8 changes: 4 additions & 4 deletions chia/_tests/clvm/test_chialisp_deserialization.py
@@ -86,16 +86,16 @@ def test_deserialization_large_numbers():
def test_overflow_atoms():
b = hexstr_to_bytes(serialized_atom_overflow(0xFFFFFFFF))
with pytest.raises(Exception):
cost, output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])
_cost, _output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])

b = hexstr_to_bytes(serialized_atom_overflow(0x3FFFFFFFF))
with pytest.raises(Exception):
cost, output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])
_cost, _output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])

b = hexstr_to_bytes(serialized_atom_overflow(0xFFFFFFFFFF))
with pytest.raises(Exception):
cost, output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])
_cost, _output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])

b = hexstr_to_bytes(serialized_atom_overflow(0x1FFFFFFFFFF))
with pytest.raises(Exception):
cost, output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])
_cost, _output = DESERIALIZE_MOD.run_with_cost(INFINITE_COST, [b])
2 changes: 1 addition & 1 deletion chia/_tests/clvm/test_puzzles.py
@@ -206,7 +206,7 @@ def test_p2_delegated_puzzle_or_hidden_puzzle_with_hidden_puzzle():

def do_test_spend_p2_delegated_puzzle_or_hidden_puzzle_with_delegated_puzzle(hidden_pub_key_index):
key_lookup = KeyTool()
payments, conditions = default_payments_and_conditions(1, key_lookup)
_payments, conditions = default_payments_and_conditions(1, key_lookup)

hidden_puzzle = p2_conditions.puzzle_for_conditions(conditions)
hidden_public_key = public_key_for_index(hidden_pub_key_index, key_lookup)
4 changes: 2 additions & 2 deletions chia/_tests/clvm/test_singletons.py
@@ -69,7 +69,7 @@ async def make_and_spend_bundle(
spend_bundle = cost_logger.add_cost(cost_log_msg, spend_bundle)

try:
result, error = await sim_client.push_tx(spend_bundle)
_result, error = await sim_client.push_tx(spend_bundle)
if error is None:
await sim.farm_block()
elif ex_error is not None:
@@ -334,7 +334,7 @@ async def test_singleton_top_layer(version, cost_logger):
DELAY_TIME,
DELAY_PH,
)
result, error = await sim_client.push_tx(SpendBundle([to_delay_ph_coinsol], G2Element()))
_result, error = await sim_client.push_tx(SpendBundle([to_delay_ph_coinsol], G2Element()))
assert error == Err.ASSERT_SECONDS_RELATIVE_FAILED

# SPEND TO DELAYED PUZZLE HASH
2 changes: 1 addition & 1 deletion chia/_tests/clvm/test_spend_sim.py
@@ -150,7 +150,7 @@ async def test_all_endpoints():
],
G2Element(),
)
result, error = await sim_client.push_tx(bundle)
_result, error = await sim_client.push_tx(bundle)
assert not error
# get_all_mempool_tx_ids
mempool_items = await sim_client.get_all_mempool_tx_ids()