CI specification #13
Open
6r1d opened this issue Sep 14, 2023 · 2 comments

6r1d commented Sep 14, 2023

We need a CI mechanism to check that the examples stay up to date and to notify us if something goes wrong.
In practice this means launching each language's examples in sequence, with pauses where needed, so that dependent steps complete in the right logical order.

Topics to discuss

  • What triggers a check
    • for example, a new PR in this examples repository
    • if possible, a merge into the Iroha repo's dev branch
  • Check intervals to react to Iroha codebase changes
    • Polling on a schedule is an alternative to triggering on the other repo's merges; I'd prefer to react to a merge into iroha2-dev
  • Notification mechanism for the issues
    • I'm thinking of a Telegram message from a bot when something breaks, telling which example and language failed, and what passed
  • Should we separate SDK tests per language or keep a single GitHub action for all languages?
  • Scripting language to run the checks
    • Python seems appropriate despite its issues; a rough sketch of such a runner follows this list
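
Here is a minimal sketch of what such a check runner could look like in Python. The `examples/<language>/` layout, the expectation that every example exits non-zero on failure, and the `TELEGRAM_BOT_TOKEN` / `TELEGRAM_CHAT_ID` environment variables are all assumptions for illustration, not anything that exists yet:

```python
import os
import subprocess
import sys
from pathlib import Path

import requests  # only needed for the Telegram notification

# Assumed layout: examples/<language>/<executable example>
EXAMPLES_ROOT = Path("examples")


def notify_telegram(text: str) -> None:
    """Send a plain-text message via the Telegram Bot API (assumed env vars)."""
    token = os.environ.get("TELEGRAM_BOT_TOKEN")
    chat_id = os.environ.get("TELEGRAM_CHAT_ID")
    if not token or not chat_id:
        return  # notifications are optional, e.g. in local runs
    requests.post(
        f"https://api.telegram.org/bot{token}/sendMessage",
        json={"chat_id": chat_id, "text": text},
        timeout=10,
    )


def run_examples() -> int:
    passed, failed = [], []
    for language_dir in sorted(p for p in EXAMPLES_ROOT.iterdir() if p.is_dir()):
        for example in sorted(language_dir.iterdir()):
            # Each example is expected to be directly executable and to
            # exit with a non-zero status when something breaks.
            result = subprocess.run([str(example)], capture_output=True)
            label = f"{language_dir.name}/{example.name}"
            (passed if result.returncode == 0 else failed).append(label)
    if failed:
        report = [f"FAIL {name}" for name in failed] + [f"PASS {name}" for name in passed]
        notify_telegram("Examples check failed:\n" + "\n".join(report))
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(run_examples())
```

The same script could be invoked from this repo's workflow and, later, from the Iroha repo's workflow as well.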
0x009922 (Contributor) commented

I would like to follow these principles:

  • All examples should target a single version of Iroha
  • Each example should be written as a test, i.e. it should contain an assertion about the effects it has made. If the assertion fails, the example's execution fails too (see the sketch after this list).
  • Each example should be isolated, i.e. it shouldn't rely on side effects made by other examples.
  • Examples should not start Iroha; Iroha should be started and configured externally.
  • All examples share the same assumptions about how Iroha is configured before their execution.
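
As an illustration of the example-as-a-test shape, a rough Python sketch is below. Iroha is assumed to be already running and configured externally; the `IROHA_API_URL` variable and the two client functions are placeholders, not the actual SDK API:

```python
import os
import uuid

# Iroha is started and configured outside of the example; the example only
# reads the endpoint it should talk to. (The env var name is an assumption.)
API_URL = os.environ.get("IROHA_API_URL", "http://127.0.0.1:8080")


def register_domain(api_url: str, domain_id: str) -> None:
    """Placeholder for the SDK call that registers a domain."""
    raise NotImplementedError("replace with the real SDK call")


def domain_exists(api_url: str, domain_id: str) -> bool:
    """Placeholder for the SDK query that checks the domain is registered."""
    raise NotImplementedError("replace with the real SDK call")


def main() -> None:
    # A unique identifier keeps the example isolated: it never relies on,
    # or interferes with, side effects left behind by other examples.
    domain_id = f"example_domain_{uuid.uuid4().hex[:8]}"

    register_domain(API_URL, domain_id)

    # The example doubles as a test: it asserts the effect it has made,
    # and a failed assertion makes the whole example fail (non-zero exit).
    assert domain_exists(API_URL, domain_id), f"{domain_id} was not registered"


if __name__ == "__main__":
    main()
```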

If we follow them, we can achieve some benefits:

  • Testing examples with different versions of Iroha becomes easy: it reduces to simply swapping the compiled binary
  • Examples are unified
  • Isolated examples-as-tests should be good educational pieces of code

The CI of this repo will do the following: on pushes to main and/or on PRs, it runs all of the examples using a specified version of Iroha. (Later, we might optimise it and run only the parts that were actually updated.) This way, each time we update the examples or the version of Iroha they target, we will see whether they all still work.

The CI of the Iroha repo will do the following: on pushes to the dev branch and/or on PRs, it will build a new version of Iroha, fetch the examples repo, and run all of the examples with the freshly built version. This way, each time Iroha developers make changes, they will see how those changes affect the examples.

Answering the topics:

  • What triggers a check: this repo will run the examples on each push/PR. The Iroha repo will do the same when changes happen there.
  • Check intervals to react to Iroha codebase changes: redundant. You open or merge a PR and see the results right away.
  • Notification mechanism for the issues: redundant, for the same reason.
  • Should we separate SDK tests per language or keep a single GitHub action for all languages? Not now. We can introduce more refined workflow triggering later (it is an optimisation). Since each example is isolated, we can run them all concurrently and collect every error at once (see the sketch after this list).
  • Scripting language to run the checks: we already use Python as a scripting language in Iroha CI, so I vote for it.
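
A minimal sketch of that concurrent run-and-collect step, assuming each example is an executable file under `examples/` that exits non-zero on failure (the `run_example*` naming is an assumption):

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def run_one(example: Path) -> tuple[str, int]:
    """Run a single example; isolation means the order does not matter."""
    result = subprocess.run([str(example)], capture_output=True)
    return str(example), result.returncode


def main() -> int:
    examples = sorted(Path("examples").rglob("run_example*"))  # assumed naming
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_one, examples))
    failures = [name for name, code in results if code != 0]
    for name in failures:
        print(f"FAILED: {name}", file=sys.stderr)
    # Collect all errors at once instead of stopping at the first failure.
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main())
```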

6r1d commented Sep 15, 2023

Let's approach the examples this way; we mostly agree here.
