Contributor Experience: Plan
Topics
- Theme 1: Help new contributors
- Theme 2: Build up working groups
- Theme 3: How to develop modules
- Theme 4: PR(/issue) backlog
- Timeline
The Contributor Experience working group has a wide remit, though it is currently focusing on the following:
- Theme 1: Help new contributors (Initial focus)
- Theme 2: Build up Working Groups (Initial focus)
- Theme 3: Make it clearer how to develop modules
- Theme 4: Reduce issue/PR backlog
Discussion is welcome in #ansible-community or in the Contributor Experience Etherpad.
Identify and remove blockers for new contributors
Peer-reviewed work and Mozilla data (via Mike Hoye) show that:
- Contributors who received code reviews within 48 hours on their first bug have an exceptionally high rate of returning and contributing.
- Contributors who wait longer than 7 days for code review on their first bug have virtually zero percent likelihood of returning.
- Showing a contributor the next bug they can work on dramatically improves the odds of contributing.
In addition:
- New contributors could become future maintainers or core contributors
- It's easy to test out new ideas on people with no prior knowledge
- We get ~30 new contributor PRs per week, which is enough to draw conclusions from
- Analyzing `new_contributor` PRs gives us good insight into how clear our docs, process and tests are. This is important as we often get sidetracked by regular contributors who have been through these issues before and overcome them.
Review these PRs a few times a week, ideally daily:
- Note what is confusing/ambiguous
- Link to documentation where needed
- Provide human guidance to help them, and acknowledge that the docs could be improved; link to where you've made a note to improve the docs
- Make a note to update the docs and @mention the contributor on the PR
These individuals show that the process can work; we need to understand how exactly, and why it doesn't work for others.
Identify & document how these PRs got merged
- Build up the community to help find & fix these issues
We can count/track many things, but we need to be mindful that:
- If we can't influence what we track, it may not be useful to track it
- Some metrics are just "general trends", and not indicative of anything more than popularity
- Unique contributors, cumulative
- Stats to back up the Mozilla research
  - How can we tell if a `new_contributor` returns for more?
  - What counts as "more": raises another PR, reviews another PR?
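A minimal sketch of how these two counts could be derived, assuming (author, created_at) pairs have already been exported from the GitHub API; the sample data and field layout are illustrative:

```python
# Sketch: cumulative unique contributors per month, plus a simple check for
# who "returned" (raised a second PR). Assumes (author, created_at) pairs
# were already exported from the GitHub API; the sample data is made up.
from collections import Counter
from datetime import datetime

prs = [
    ("alice", "2018-07-02T10:00:00Z"),
    ("bob",   "2018-07-09T12:30:00Z"),
    ("alice", "2018-08-01T09:15:00Z"),  # alice returned
]

seen, cumulative = set(), {}
for author, created in sorted(prs, key=lambda p: p[1]):
    month = datetime.strptime(created, "%Y-%m-%dT%H:%M:%SZ").strftime("%Y-%m")
    seen.add(author)
    cumulative[month] = len(seen)

returned = [a for a, n in Counter(a for a, _ in prs).items() if n > 1]
print(cumulative)  # {'2018-07': 2, '2018-08': 2}
print(returned)    # ['alice']
```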
Data
- Interquartile range for number of days to merge/close for PRs.
- With manual review we'd expect some PRs to be closed/merged the day they are created
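A small sketch of the quartile calculation, using Python's statistics module on hard-coded sample durations (real values would come from each PR's created_at/closed_at timestamps):

```python
# Sketch: median and interquartile range of days-to-close. The durations
# are hard-coded samples; real values would be computed from each PR's
# created_at and closed_at timestamps.
import statistics

days_to_close = [0, 1, 1, 2, 4, 7, 12, 30, 90, 400]

q1, med, q3 = statistics.quantiles(days_to_close, n=4)
print(f"median: {med} days, IQR: {q1}-{q3} days")
```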
How to influence
- It's possible we may need to update Core's `needs_triage` process if it's agreed there are PRs that should be merged but are not
- Expect to shift the lower quartile down a bit by:
  - A human actually looking at and dealing with PRs (though this manual step wouldn't always scale)
  - Longer term, improving the process (fixing issues identified by the above)
- A PR being closed at triage means the PR is invalid. This may indicate bad PRs (duplicate, already fixed, not an applicable fix/feature).
- It's possible we may need to update Core's `needs_triage` process if it's agreed there are PRs that should be closed but are not
- Deprecated feature PRs should perhaps get an automatic message (and perhaps be closed)
Data
- Days a PR spends with failing (red) CI. This indicates how understandable the CI failures are, as well as how easy they are to fix.
How to influence
- Improvements to the CI error messages (and move to GitHub Checks API) should make the errors easier to understand and therefore we'd expect a reduction in time
- Would improving the wording of the link to failing tests reduce the duration?
- Would improved documentation for certain CI failures help this?
- Looking at `label:new_contributor` PRs that have failed CI for the longest could indicate the types of CI failures that are hardest for a contributor to understand. Addressing these could help reduce the long tail
- Need to be mindful that a PR may never go CI green; we need some way of representing that differently, since days of red = days the PR has been open
- How long till a human has reviewed the PR
- Does a human review within a few days (rather than a few weeks/months) keep the contributor engaged/motivated?
- This may be more complex to analyse
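For the time-to-first-human-review metric above, a rough sketch against the GitHub REST API; the BOTS set is an assumption, and a real run would need authentication and pagination, omitted here for brevity:

```python
# Sketch: hours from PR creation to the first human review, via the GitHub
# REST API. The BOTS set is an assumption; a real run needs a token and
# pagination.
from datetime import datetime

import requests

REPO = "ansible/ansible"
BOTS = {"ansibot"}  # bot accounts to ignore -- adjust as needed
FMT = "%Y-%m-%dT%H:%M:%SZ"

def first_human_review_hours(pr_number):
    base = f"https://api.github.com/repos/{REPO}"
    pr = requests.get(f"{base}/pulls/{pr_number}").json()
    reviews = requests.get(f"{base}/pulls/{pr_number}/reviews").json()
    human = [r for r in reviews if r["user"]["login"] not in BOTS]
    if not human:
        return None  # no human review yet
    first = min(datetime.strptime(r["submitted_at"], FMT) for r in human)
    created = datetime.strptime(pr["created_at"], FMT)
    return (first - created).total_seconds() / 3600
```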
- Any issues should be added to the list of fixes
As issues (no matter how small) are identified, they should be documented here and struck through with a date once complete.
- Document argspec (WIP: docs-argspec)
- CI Issues
  - Are the issues found valid?
  - Are the error messages obvious?
  - GitHub Checks API should help this (waiting on Shippable and Zuul)
- Spot trends, do root-cause analysis (RCA), fix at source
By reviewing new_contributor PRs that have been merged we can identify what is working well
- The majority of PRs merged quickly appear to be:
  - Docs
  - Small fixes
  - Docker (thanks to Felix & co.)
- Confusion about whether backports will be done #46521
- After merge, direct people to related PRs
- "Should I be a maintainer?"
- `label:shipit` PRs don't always appear to be getting a quick sanity review and merge
Objective: This is about scale and empowering others to do things themselves
A well functioning group should be able to:
- Welcome new members into the group
- Provide a variety of items (not just coding) for people to get involved with
- Keep on top of their backlog
- Set direction
Goal: Find out if we are building up and maintaining active groups.
aka: if we don't measure, how do we know if we are improving?
Interested in participation, not just people idling:
- Unique people active in IRC meetings
- Number of people active on agenda issues
- How do people find out about the groups
- Why do people stay
- Why do people leave
Goal: Make life easier
- Asking a wider range of people for pain points allows us to spot common issues and address them
- Review previous Contributor Summit docs
- Important to get input from new contributors
The various groups have found things that work for them; we should review, document, and roll out what works to other groups. If something doesn't work, analyse why not.
- AWS: Monthly status
- Simple to do
- Shows progress
- Motivates
- Network: track actions on agenda
- Network First Gamification Exercise - Earn Ansible Swag!
Goal: Showing progress keeps people motivated
- Motivates existing and new people
- Such as AWS's boto3 porting and testing monthly stats
Goal: Ensure that new people that want to get involved have something to help with
- MUST include non-Python tasks
- MUST include some well defined simple items
On hold till the above items have been done; we don't want to invite more people till the groups are in a better state
- Ansibullbot to promote IRC Working groups #820 DONE
- Standard slide and promotion to include in Meetups
On hold till the above items have been done
Series of blog posts, one per working group, showing what they've achieved and how to get involved.
Objective: Make it clearer how to develop modules
- Docs: dev_guide reorg to make it easier to create and write content (acozine is working on this)
- Docs: Real examples on how to document your module
- Docs: fix module checklist
- Docs: How to write good integration tests
- Continue to spot common issues with new PRs and doc/automatically test them
(Will partly be addressed by Theme 2)
Wherever modules live (ansible/ansible, modules-core, ...) there will always be issues and PRs raised. Understanding how the backlog builds up and empowering people to reduce it is key.
The strategy for this is:
- Use Plan-Do-Check-Adjust
- Use quantitative measurements where possible to drive Plan-Do-Check-Adjust
- Make continual gradual improvements
- Break the PR workflow into individual stages and attack the individual stages
- PR Created
- ansibullbot adds `needs_triage`
- ansibullbot notifies maintainer(s)
- CI is run, PR status updated
- Member of Core does initial triage
- Main workflow
- The following may happen multiple times and in any order
- PR updated so CI is green
- Maintainers (or others) add review comments that need addressing
- Maintainers (or others) add `shipit`
- ansibullbot adds `label:shipit` and potentially automerges based on its rule set
- Person with Commit powers merges PR
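As a rough illustration of how these stages map onto a PR's labels and CI state, a hypothetical helper; the stage names and precedence here are guesses, since the authoritative logic lives in ansibullbot's rule set:

```python
# Sketch: a rough mapping from a PR's labels/CI state to the workflow stage
# above. Hypothetical helper -- the authoritative logic is ansibullbot's
# rule set, and the precedence shown here is a guess.
def stage_of(labels, ci_green):
    if "needs_triage" in labels:
        return "awaiting initial triage"
    if not ci_green:
        return "main workflow: waiting on contributor (CI red)"
    if "needs_revision" in labels:
        return "main workflow: review comments to address"
    if "shipit" in labels:
        return "ready to merge (automerge or committer)"
    return "main workflow: awaiting reviews"

print(stage_of({"needs_triage", "bug"}, ci_green=False))
# -> awaiting initial triage
```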
Given the size of the Issue and PR backlog we use GitHub Labels to represent:
- What the issue/PR represents: `bug`, `feature`
- Code affected: `new_module`, `plugin/{action,callback,lookup,...}`, etc.
Some of the key labels are:
- `needs_triage` - Issue or PR has just been created and a member of the Core Team hasn't reviewed it yet. Triage is a very quick process
- `bug` - Bug fix (PR) or report (issue)
- `ci_verified` - Identifies pull requests for which CI failed
- `feature` - Adds a feature (PR) or feature request (issue)
- `new_module` - Identifies pull requests adding a new module
- `support:core`, `support:network`, `support:certified`, `support:community`
We also use labels for Working Groups (`aws`, `azure`, `network`, `windows`, etc)
See the almost full list of labels for more details
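For example, per-label counts can be pulled from the GitHub search API; a minimal sketch (unauthenticated requests are heavily rate-limited, and a real run would pass a token):

```python
# Sketch: counting open PRs per label via the GitHub search API.
import requests

def count_open_prs(label):
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": f"repo:ansible/ansible is:pr is:open label:{label}"},
    )
    return resp.json()["total_count"]

for label in ("needs_triage", "new_contributor", "shipit"):
    print(label, count_open_prs(label))
```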
Aim
- How can we measure the "new contributor experience" in a quantitative manner to allow us to identify bottlenecks in the process? We can then change part of the workflow and see what effect that has.
Definitions
- `new contributors`: GitHub users that haven't had any PRs merged into ansible/ansible
- `experience`: The workflow process that the contributor goes through from PR creation to PR being merged
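A sketch of testing the first definition against the GitHub search API; the function name is hypothetical, and unauthenticated search requests are heavily rate-limited:

```python
# Sketch: a "new contributor" per the definition above -- a GitHub user
# with no merged PRs in ansible/ansible.
import requests

def is_new_contributor(login):
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": f"repo:ansible/ansible is:pr is:merged author:{login}"},
    )
    return resp.json()["total_count"] == 0
```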
We need to be able to track the change (positive or negative) that's occurred since the workflow was updated. There will not be one single change to the workflow, but rather a steady stream of improvements and experiments. This means that the results need to be linked to a date (i.e. the horizontal axis is the date). FIXME What's the correct way of phrasing this?
All of the above multiplied by:
We expect (FIXME WHY) that different types of PRs will have different patterns/durations as they progress through the workflow. Therefore we should track these individually, as:
- The bottlenecks may be specific to a certain type of PR
- The workflow fixes may be specific to a certain type of PR
The rough matrix would be:
- Type: bugfix, feature
- Code type: Module, plugin_type (callback, lookup, inventory, etc)
- Support: Core, Network, Community
- SIG: if the PR has been tagged with a specific working group (SIG) - lower priority
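A sketch of bucketing merge durations by month and by one axis of the matrix, so that workflow changes show up on a date axis; the sample records and the label-to-cell mapping are illustrative:

```python
# Sketch: bucket days-to-merge by month and by one axis of the matrix
# (bugfix vs feature). The records are illustrative samples; a real run
# would derive them from PR labels and timestamps.
from collections import defaultdict
from statistics import median

prs = [  # (labels, month merged, days open)
    ({"bug", "module"}, "2018-07", 12),
    ({"feature", "module"}, "2018-07", 3),
    ({"bug", "plugin/lookup"}, "2018-08", 20),
]

buckets = defaultdict(list)
for labels, month, days in prs:
    kind = "feature" if "feature" in labels else "bugfix"
    buckets[(month, kind)].append(days)

for (month, kind), days in sorted(buckets.items()):
    print(month, kind, "median days:", median(days))
```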
Possible results and resolutions
We may find some trends that depend on the above matrix, such as:
- Features are merged quicker than bugfixes
  - Is this because features are net-new and couldn't cause regressions?
  - Are people naturally more interested in features than bug fixes?
- Are there groups of bug fixes that need reviewing and merging as a group?
- Are maintainers not being notified for all changes (i.e. non-module PRs are not being notified)?
Dumping ground of other thoughts not directly related to another section:
- Number of `label:needs_triage` over time - is Core keeping up with triage?
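Since the GitHub API keeps no historical label counts, one way to get the over-time series is a periodic snapshot; a sketch, where the file name and schedule are arbitrary choices:

```python
# Sketch: append a daily snapshot of the open needs_triage count to a CSV,
# building the over-time series by sampling.
import csv
import datetime

import requests

resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": "repo:ansible/ansible is:open label:needs_triage"},
)
with open("needs_triage.csv", "a", newline="") as f:
    csv.writer(f).writerow(
        [datetime.date.today().isoformat(), resp.json()["total_count"]]
    )
```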
Via BOTMETA and a module's `author:` field we have a reasonable idea of who to notify when an issue or PR is raised.
Before we add more maintainers we need to ensure that the existing process is working, i.e. that "pings" are being responded to.
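A sketch of the BOTMETA lookup; the YAML shown is a simplified stand-in for the real .github/BOTMETA.yml, which also uses macros and extra keys such as labels and supported_by:

```python
# Sketch: look up who to notify for a changed file. The YAML here is a
# simplified stand-in for .github/BOTMETA.yml.
import yaml

botmeta = yaml.safe_load("""
files:
  lib/ansible/modules/cloud/amazon/:
    maintainers: alice bob
""")

def maintainers_for(path):
    hits = []
    for prefix, meta in botmeta["files"].items():
        if path.startswith(prefix) and isinstance(meta, dict):
            hits += meta.get("maintainers", "").split()
    return hits

print(maintainers_for("lib/ansible/modules/cloud/amazon/ec2.py"))
# -> ['alice', 'bob']
```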
- Review `label:deprecated` - check bot logic (auto-close feature PRs)?
- `label:docsite_pr` links
- `label:docs` Review group - acozine is keeping stats
- 2018-07-23 Theme 2: Ansibullbot to promote IRC Working groups #820
- 2018-08-22 Theme 4: Simplify issue templates