CONTRIBUTING.md

File metadata and controls

458 lines (356 loc) · 29.7 KB

Contributing



Hi.  👋🏽  👋   We are happy you are here.  🎉🌟

Thank you so much for your interest in contributing!

exercism/python is one of many programming language tracks on exercism.org. This repo holds all the instructions, tests, code, & support files for Python exercises currently under development or implemented & available for students.

🌟   Track exercises support Python 3.8.
🌟   Track tooling (test-runner, representer, analyzer, and Continuous Integration) runs on Python 3.9.

Exercises are grouped into concept exercises which teach the Python syllabus, and practice exercises, which are unlocked by progressing in the syllabus tree  🌴  . Concept exercises are constrained to a small set of language or syntax features. Practice exercises are open-ended, and can be used to practice concepts learned, try out new techniques, and play. These two exercise groupings can be found in the track config.json, and under the python/exercises directory.


🌟🌟  If you have not already done so, please take a moment to read our Code of Conduct. 🌟🌟 
It might also be helpful to look at Being a Good Community Member & The words that we use, and Pull Requests.

Some defined roles in our community: Contributors | Mentors | Maintainers | Admins


✨ 🦄  Want to jump directly into Exercism specifications & detail?
     Structure | Tasks | Concepts | Concept Exercises | Practice Exercises | Presentation
     Writing Style Guide | Markdown Specification ( ✨   versions available in contributing on exercism.org.)


🐛 Did you find a bug?

It is not uncommon to discover typos, confusing directions, or incorrect implementations of certain tests or code examples. Or you might have a great suggestion for a hint to aid students ( 💙  ), see optimizations for exemplar or test code, find missing test cases to add, or want to correct factual and/or logical errors. Or maybe you have a great idea for an exercise or feature (❗ ).

Our track is always a work in progress! 🌟🌟
Please 📛  Open an issue 📛 , and let us know what you have found or would like to suggest.


🚧 Did you write a patch that fixes a bug?

Before you get started, please review Pull Requests.

💛 💙  We Warmly Welcome Pull Requests that are:

             1️⃣     Small, contained fixes for typos/grammar/punctuation/code syntax on [one] exercise,
             2️⃣     Medium changes that have been agreed/discussed via a filed issue,
             3️⃣     Contributions from our help wanted issue list,
             4️⃣     Larger (and previously agreed-upon) contributions from recent & regular (within the last 6 months) contributors.

When in doubt, 📛  Open an issue 📛 . We will happily discuss your proposed change.
🐍  But we should talk before you take a whole lot of time or energy implementing anything.


In General


  • Please make sure to have a quick read-through of our Exercism Pull Requests document before jumping in. 😅
  • Maintainers are happy to review your work and help troubleshoot with you. 💛 💙 
    • Requests are reviewed as soon as is practical/possible.
    • (❗ ) Reviewers may be in a different timezone ⌚ , or tied up  🧶  with other tasks.
    • Please wait at least 72 hours before pinging.
  • If you need help, comment in the Pull Request/issue.  🙋🏽‍♀️  
  • If you would like in-progress feedback/discussion, please mark your Pull Request as a [draft]
  • Pull Requests should be focused around a single exercise, issue, or change.
  • Pull Request titles and descriptions should make clear what has changed and why.
    • Please link  🔗  to any related issues the PR addresses.
  • 📛  Open an issue 📛  and discuss it with  🧰  maintainers before:
    • creating a Pull Request that makes significant or breaking changes.
    • making changes across multiple exercises, even if they are typos or small fixes.
    • starting anything that is going to require a lot of work (on your part or the maintainers' part).
  • Follow coding standards found in PEP8 ("For Humans" version here).
  • All files should have a proper EOL. This means exactly one newline at the end of the final line of text files.
  • Otherwise, watch out  ⚠️  for trailing spaces, extra blank lines, extra spaces, and spaces in blank lines.
  • Continuous Integration is going to run a lot of checks. Try to understand & fix any failures.

⚠️  Pre-Commit Checklist ⚠️

  •  Update & rebase your branch with any (recent) upstream changes.
  •  Spell and grammar check all prose changes.
  •  Run Prettier on all markdown and JSON files.
  •  Run flake8 with flake8 config to check general code style standards.
  •  Run pylint with pylint config to check extended code style standards.
  •  Use pytest or the python-track-test-runner to test any changed example.py/exemplar.py files against their associated test files.
  •  Similarly, use pytest or the python-track-test-runner to test any changed test files.
    • Check that tests fail properly, as well as succeed.  (e.g., make some tests fail on purpose to "test the tests" & failure messages).
  •  Double-check all files for proper EOL.
  •  Regenerate exercise documents if you have modified or created a hints.md file for a practice exercise.
  •  Regenerate the test file if you have modified or created a Jinja2 template file for a practice exercise.
    • Run the generated test file result against its example.py.
  •  Run configlet-lint if the track config.json, or any other exercise config.json has been modified.

Prose Writing Style and Standards


Non-code content (exercise introductions & instructions, hints, concept write-ups, documentation, etc.) should be written in American English. We strive to watch the words we use.

When a word or phrase usage is contested/ambiguous, we default to what is best understood by our international community of learners, even if it "sounds a little weird" to a "native" American English speaker.

Our documents use Markdown, with certain alterations & additions. Here is our full Markdown Specification.  📐 We format/lint our Markdown with Prettier. ✨


Coding Standards


  1. We follow PEP8 ("For Humans" version here). In particular, we (mostly) follow the Google flavor of PEP8.
  2. We use flake8 to help us format Python code nicely. Our flake8 config file is .flake8 in the top level of this repo.
  3. We use pylint to catch what flake8 doesn't. Our pylint config file is pylintrc in the top level of this repo.
  4. We use yapf to auto-format our Python files. Our yapf config file is .style.yapf in the top level of this repo.

General Code Style Summary
  • Spaces, never tabs
  • 4-space indentation
  • 120-character line limit (as opposed to the default limit of 79)
  • Variable, function, and method names should be lower_case_with_underscores (aka "snake case")
  • Classes should be named in TitleCase (aka "CapWords" or "PascalCase")
  • No single letter variable names outside of a lambda. This includes loop variables and comprehensions.
  • Refrain from putting list, tuple, set, or dict members on their own lines. Fit as many data members as can be easily read on one line, before wrapping to a second.
  • If a data structure spreads to more than one line and a break (for clarity) is needed, prefer breaking after the opening bracket.
  • Avoid putting closing brackets on their own lines. Prefer closing a bracket right after the last element.
  • Use ' and not " as the quote character by default.
  • Use """ for docstrings.
  • Prefer implicit line joining for long strings.
  • Prefer enclosing imports in (), and putting each name on its own line when importing multiple names.
  • Two lines between Classes, one line between functions. Other vertical whitespace as needed to help readability.
  • Always use an EOL to end a file.
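The points above can be seen together in a short sketch. This is an invented example (the names WordTally, WordCount, and most_common_words are not from the track) showing the naming, quoting, import, and spacing conventions in one place:

```python
from collections import (Counter,
                         namedtuple)  # multiple imported names: enclosed in (), one per line

# Module-level names and 4-space indentation; lines stay under the 120-character limit.
WordCount = namedtuple('WordCount', ['word', 'count'])


class WordTally:                      # classes are TitleCase; two blank lines above
    """Tally the words in a piece of text."""  # docstrings use triple double quotes

    def __init__(self, text):
        self.counts = Counter(text.lower().split())

    def most_common_words(self, how_many):  # functions/methods are snake_case; one blank line between them
        # Descriptive loop variables (word, count), never single letters.
        return [WordCount(word, count) for word, count in self.counts.most_common(how_many)]
```

Note the single quotes by default, the data members kept on one line, and that the file itself would end with exactly one newline.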
Test File Style (concept exercises)
  • unittest.TestCase syntax, with PyTest as a test runner.
    • We are transitioning to using more PyTest features/syntax, but are leaving unittest syntax in place where possible.
    • Always check with a maintainer before introducing a PyTest feature into your tests.
  • Test Classes should be titled <ExerciseSlug>Test. e.g. class CardGamesTest(unittest.TestCase):
  • Test method names should begin with test_. Try to make test case names descriptive but not too long.
  • Favor parameterizing tests that only vary input data. Use unittest.TestCase.subTest for parameterization.
  • Avoid excessive line breaks or indentation - particularly in parameterized tests.
    • Excessive breaks & indentation within the for loops cause issues when formatting the test code for display on the website.
  • Use enumerate() where possible when indexes are needed. See Card Games for example usage.
  • Favor using names like inputs, data, input_data, test_data, or test_case_data for test inputs.
  • Favor using names like results, expected, result_data, expected_data, or expected_results for test outcomes.
  • Favor putting the assert failure message outside of the self.assert* call. Name it failure_msg. See Card Games for example usage.
  • Favor f-strings for dynamic failure messages. Please make your error messages as relevant and human-readable as possible.
  • We relate test cases to task number via a custom PyTest Marker.
  • We prefer test data files when test inputs/outputs are verbose.
    • These should be named with _data or _test_data at the end of the filename, and saved alongside the test case file.
    • See the Cater-Waiter exercise directory for an example of this setup.
    • Test data files need to be added under an editor key within config.json "files".
    • Check with a maintainer if you have questions or issues, or need help with an exercise config.json.
  • For new test files going forward, omit if __name__ == "__main__": unittest.main().
  • Lint with both flake8 and pylint.
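Several of these conventions can be combined into one minimal sketch. The function value_of_card and the test data below are invented for illustration, not taken from a real exercise:

```python
import unittest


def value_of_card(card):
    """Return a numeric value for a card (hypothetical function under test)."""
    if card in ('J', 'Q', 'K'):
        return 10
    if card == 'A':
        return 1
    return int(card)


class CardGamesTest(unittest.TestCase):
    def test_value_of_card(self):
        test_data = [('2', 2), ('K', 10), ('A', 1)]

        # enumerate() provides a readable variant number without manual indexing;
        # subTest parameterizes a test that only varies in input data.
        for variant, (card, expected) in enumerate(test_data, start=1):
            with self.subTest(f'variation #{variant}', card=card, expected=expected):
                actual_result = value_of_card(card)
                failure_msg = (f'Called value_of_card({card!r}). '
                               f'The function returned {actual_result}, '
                               f'but the test expected {expected}.')
                self.assertEqual(actual_result, expected, msg=failure_msg)
```

Per the guideline above, the file omits the if __name__ == "__main__": unittest.main() block, since tests are run through PyTest or the track test runner.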

If you have any questions or issues, don't hesitate to ask the maintainers -- they're always happy to help. 💛 💙 

Some of our code is old and does not (yet) conform to all these standards.
We know it, and trust us, we are working on fixing it. But if you see  👀  something,  👄  say something. It will motivate us to fix it! 🌈


Python Versions


This track officially supports Python 3.8.
The track test runner, analyzer, and representer run in docker on python:3.9-slim.

  • All exercises should be written for compatibility with Python 3.8 or 3.9.

  • Version backward incompatibility (e.g., an exercise using a 3.8-only or 3.9-only feature) should be clearly noted in any exercise hints, links, introductions, or other notes.

  • Here is an example of how the Python documentation handles version-tagged  🏷  feature introduction.

  • Most exercises will work with Python 3.6+, and many are compatible with Python 2.7+.

    • Please do not change existing exercises to add new language features without consulting with a maintainer first.
    • We  💛 💙  modern Python, but we also want to avoid student confusion when it comes to which Python versions support brand-new features.
  • All test suites and example solutions must work in all Python versions that we currently support. When in doubt about a feature, please check with maintainers.
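As a concrete (invented) illustration of why version notes matter, the snippet below uses an assignment expression, a feature introduced in Python 3.8; any hint or example relying on it should say so explicitly:

```python
# The "walrus" operator (:=) requires Python >= 3.8 -- a hint or stub using it
# should carry a note such as: "Note: assignment expressions require Python 3.8+."

inventory = ['sword', 'shield', 'lantern']

if (item_count := len(inventory)) > 1:
    message = f'{item_count} items packed'
else:
    message = 'traveling light'

print(message)  # prints "3 items packed"
```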



A Little More on Exercises


  • Each exercise must be self-contained. Please do not use or reference files that reside outside the given exercise directory. "Outside" files will not be included if a student fetches the exercise via the command-line interface (CLI).

  • Each exercise/problem should include

    • a complete test suite,
    • an example/exemplar solution,
    • a stub file ready for student implementation.
  • For specifications, refer to the links below, depending on which type of exercise you are contributing to.

  • Practice exercise descriptions and instructions come from a centralized, cross-track problem specifications repository.

    • Any updates or changes need to be proposed/approved in problem-specifications first.
    • If Python-specific changes become necessary, they need to be appended to the canonical instructions by creating an instructions.append.md file in this (exercism/python) repository.
  • Test suites for many practice exercises are similarly auto-generated from data in problem specifications.

    • Any changes to them need to be proposed/discussed in the problem-specifications repository and approved by 3 track maintainers, since changes could potentially affect many (or all) exercism language tracks.
    • If Python-specific test changes become necessary, they can be appended to the exercise tests.toml file.
    • 📛  Please file an issue 📛  and check with maintainers before adding any Python-specific tests.


    ✅   Concept Exercise Checklist
    • .docs/hints.md
    • .docs/instructions.md
    • .docs/introduction.md
    • .meta/config.json
    • .meta/design.md
    • .meta/exemplar.py (exemplar solution)
    • <exercise-slug>_test.py (test file)
    • <exercise-slug>.py (stub file)
    • concepts/../introduction.md
    • concepts/../about.md
    • concepts/../links.json
    • concepts/../.meta/config.json
    ✅   Practice Exercise Checklist
    • .docs/instructions.md (required)
    • .docs/introduction.md (optional)
    • .docs/introduction.append.md (optional)
    • .docs/instructions.append.md (optional)
    • .docs/hints.md (optional)
    • .meta/config.json (required)
    • .meta/example.py (required)
    • .meta/design.md (optional)
    • .meta/template.j2 (template for generating tests from canonical data)
    • .meta/tests.toml (tests configuration from canonical data)
    • <exercise-slug>_test.py (auto-generated from canonical data)
    • <exercise-slug>.py (required)


External Libraries and Dependencies


Our tooling (runners, representers, and analyzers) runs in isolated containers within the exercism website. Because of this isolation, exercises cannot rely on third-party or external libraries. Any library needed for an exercise or exercise tests must be incorporated as part of a tooling build, and noted for students who are using the CLI to solve problems locally.

If your exercise depends on a third-party library (aka not part of standard Python), please consult with maintainers about it. We may or may not be able to accommodate the package.


Auto-Generated Test Files and Test Templates


Practice exercises inherit their definitions from the problem-specifications repository in the form of description files. Exercise introductions, instructions, and (for many, but not all, exercises) test files are then machine-generated for each language track.

Changes to practice exercise specifications should be raised/PR'd in problem-specifications and approved by 3 track maintainers. After an exercise change has gone through that process, related documents and tests for the Python track will need to be re-generated via configlet. Configlet is also used as part of the track CI, essential track and exercise linting, and other verification tasks.

If a practice exercise has an auto-generated <exercise_slug>_test.py file, there will be a .meta/template.j2 and a .meta/tests.toml file in the exercise directory. If an exercise implements Python track-specific tests, there may be a .meta/additional_tests.json to define them. These additional_tests.json files will automatically be included in test generation.

Exercise Structure with Auto-Generated Test Files

<exercise-slug>/
├── .docs
│   └── instructions.md
├── .meta
│   ├── config.json
│   ├── example.py
│   ├── template.j2
│   └── tests.toml
├── <exercise_slug>.py #stub file
└── <exercise_slug>_test.py #test file

Practice exercise <exercise_slug>_test.py files are generated/regenerated via the Python Track Test Generator.
Please reach out to a maintainer if you need any help with the process.


Implementing Practice Exercise Tests


If an unimplemented exercise has a canonical-data.json file in the problem-specifications repository, a generation template must be created. See the Python track test generator documentation for more information.

If an unimplemented exercise does not have a canonical-data.json file, the test file must be written manually.


Implementing Practice Exercise Example Solutions


Example solution files serve two purposes only:

  1. Verification of the tests
  2. Example implementation for mentor/student reference

Unlike concept exercises, practice exercise example.py files are NOT intended as a "best practice" or "standard".
They are provided as proof that there is an acceptable and testable answer to the practice exercise.


Implementing Track-specific Practice Exercises


Implementing Track-specific Practice Exercises is similar to implementing a canonical exercise that has no canonical-data.json. But in addition to the tests, the exercise documents (instructions, etc.) will also need to be written manually. Carefully follow the structure of generated exercise documents and the exercism practice exercise specification.


Generating Practice Exercise Documents


You will need:

1.  A local clone of the problem-specifications repository.
2.  configlet

For Individual Exercises

configlet generate <path/to/track> --spec-path path/to/problem/specifications --only example-exercise

For all Practice Exercises

configlet generate <path/to/track> --spec-path path/to/problem/specifications