
Update observer to collect CPU utilization data #616

Merged
merged 8 commits into main from observer_calculate_utilization
Jul 6, 2023

Conversation

blt (Collaborator) commented Jul 5, 2023

What does this PR do?

This commit updates the observer with the aim of calculating CPU utilization. We retain the output of user- and kernel-space CPU seconds but add utilization, on the understanding that this is useful to users who need to reason about utilization rather than raw ticks.
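
For illustration, here is a minimal sketch of the tick-to-utilization conversion described above. All names (`utilization`, `allowed_cpus`, the sampling scheme) are made up for this sketch; it is not the PR's actual observer code.

```rust
// Hedged sketch: derive a 0..=1 utilization figure from CPU tick deltas.
fn utilization(
    user_ticks_diff: u64,   // user-space ticks consumed since the last sample
    kernel_ticks_diff: u64, // kernel-space ticks consumed since the last sample
    ticks_per_second: f64,  // e.g. from procfs::ticks_per_second()
    wall_seconds_diff: f64, // wall-clock seconds between samples
    allowed_cpus: f64,      // CPUs the target is permitted to use
) -> f64 {
    // Convert ticks to CPU-seconds, then normalize by wall time and CPU budget.
    let cpu_seconds = (user_ticks_diff + kernel_ticks_diff) as f64 / ticks_per_second;
    (cpu_seconds / wall_seconds_diff) / allowed_cpus
}

fn main() {
    // 350 total ticks over 1s at 100 Hz against a 4-CPU budget => 0.875.
    println!("{}", utilization(300, 50, 100.0, 1.0, 4.0));
}
```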

Additional Notes

Note that if there is CPU use in the system that is not in a parent-child relationship with the observed process but should still be counted, this change will not address that.

Related issues

REF SMP-613
#613

blt requested a review from a team July 5, 2023 18:53
This commit updates the observer with the aim of calculating CPU
utilization. We retain the output of user- and kernel-space CPU seconds
but add utilization, on the understanding that this is useful to users
who need to reason about utilization rather than raw ticks.

Note that if there is CPU use in the system that is not in a parent-child
relationship with the observed process but should still be counted, this
change will not address that.

REF SMP-612

Signed-off-by: Brian L. Troutwine <[email protected]>
blt force-pushed the observer_calculate_utilization branch from 9cfef5b to a6e3e6a on July 5, 2023 18:59
github-actions bot commented Jul 5, 2023

Regression Detector Results

Run ID: 621c321e-974b-40cb-993a-235632f9479d
Baseline: 155fcd5
Comparison: a6e3e6a
Total lading-target CPUs: 4

Explanation

A regression test is an integrated performance test for lading-target in a repeatable rig, with varying configuration for lading-target. What follows is a statistical summary of a brief lading-target run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether lading-target performance is changed, and to what degree, by a pull request.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.

We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.
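
As a hedged sketch, the two criteria and the erratic check described above amount to something like the following. The function names and signatures are illustrative, not the detector's actual implementation.

```rust
// Hedged sketch of the regression-decision criteria described above.
fn is_regression(delta_mean_pct: f64, ci_low_pct: f64, ci_high_pct: f64) -> bool {
    let large_enough = delta_mean_pct.abs() >= 5.0; // criterion 1: meaningful magnitude
    let ci_excludes_zero = ci_low_pct > 0.0 || ci_high_pct < 0.0; // criterion 2: significance
    large_enough && ci_excludes_zero
}

fn is_erratic(std_dev: f64, mean: f64) -> bool {
    // "Erratic" as described above: coefficient of variation greater than 0.1.
    (std_dev / mean).abs() > 0.1
}

fn main() {
    assert!(!is_regression(-0.20, -0.22, -0.18)); // significant but tiny: not flagged
    assert!(is_regression(-6.3, -7.1, -5.5));     // large and significant: flagged
    assert!(!is_erratic(0.05, 1.0));
}
```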

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | -0.20 | [-0.22, -0.18] | 100.00% |
| blackhole_from_apache_common_http | ingress throughput | -0.29 | [-0.34, -0.24] | 100.00% |

src/observer.rs (outdated review thread, resolved)
src/observer.rs (outdated review thread, resolved)
src/observer.rs Outdated
let ticks_per_second: f64 = procfs::ticks_per_second() as f64;
let limits = process.limits().map_err(Error::ProcError)?;
// NOTE units on the CPU limits are 'CPU / second'
let max_cpu_time: Limit = limits.max_cpu_time;
A contributor commented:

Does this value come from the RLIMIT_CPU field of the struct rlimit queried by the getrlimit syscall (man page)? I'm wondering if the units of this value are seconds or CPU*seconds.

blt (Collaborator, author) replied:

It's read out of /proc/pid/limits, see https://docs.rs/procfs/0.15.1/src/procfs/process/limit.rs.html. So yeah, RLIMIT_CPU. "This is a limit, in seconds, on the amount of CPU time that the process can consume." Which I think would be, in the notation I'm using here, CPU units per second.
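
For reference, a minimal sketch of reading this limit with the procfs crate, assuming the 0.15-era API linked above; error handling is simplified and this is not the observer's actual code.

```rust
// Hedged sketch: read RLIMIT_CPU (max_cpu_time) out of /proc/self/limits.
use procfs::process::{LimitValue, Process};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let me = Process::myself()?;
    let limits = me.limits()?;
    // max_cpu_time is RLIMIT_CPU: a cap, in seconds, on total CPU time consumed.
    match limits.max_cpu_time.soft_limit {
        LimitValue::Value(secs) => println!("soft CPU-time limit: {secs} s"),
        LimitValue::Unlimited => println!("soft CPU-time limit: unlimited"),
    }
    Ok(())
}
```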

Signed-off-by: Brian L. Troutwine <[email protected]>
src/observer.rs Outdated
Comment on lines 198 to 203
let kernel_utilization_soft = (kernel_time_seconds_diff / process_uptime_seconds_diff) / soft_cpu_limit;
let kernel_utilization_hard = (kernel_time_seconds_diff / process_uptime_seconds_diff) / hard_cpu_limit;
let user_utilization_soft = (user_time_seconds_diff / process_uptime_seconds_diff) / soft_cpu_limit;
let user_utilization_hard = (user_time_seconds_diff / process_uptime_seconds_diff) / hard_cpu_limit;
let utilization_soft = (time_seconds_diff / process_uptime_seconds_diff) / soft_cpu_limit;
let utilization_hard = (time_seconds_diff / process_uptime_seconds_diff) / hard_cpu_limit;
A contributor commented:

Related to my comment above asking about the units of max_cpu_time above, I'm wondering if the division by process_uptime_seconds_diff is necessary in this group of calculations.
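
One way to frame the question is a quick unit walk-through. The numbers and the two interpretations of `soft_cpu_limit` below are illustrative assumptions, not a claim about the correct reading.

```rust
// Hedged unit analysis of the expression under discussion; values are made up.
fn main() {
    let cpu_seconds_diff = 1.5;    // CPU-seconds consumed in the sampling window
    let uptime_seconds_diff = 1.0; // wall-clock seconds in the window
    let soft_cpu_limit = 4.0;      // interpretation debated above

    // CPU-seconds per wall-second: a dimensionless "cores in use" rate.
    let cores_used = cpu_seconds_diff / uptime_seconds_diff;

    // If soft_cpu_limit is a rate ("CPU / second"), this yields a clean 0..1
    // utilization. If RLIMIT_CPU is instead a total CPU-second budget, the
    // division by uptime_seconds_diff would not be needed under that reading.
    let utilization = cores_used / soft_cpu_limit;
    println!("cores_used={cores_used}, utilization={utilization}");
}
```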

github-actions bot commented Jul 5, 2023

Regression Detector Results

Run ID: d0fa7a88-b486-49c4-ae6a-d75fbab014d8
Baseline: 155fcd5
Comparison: b4723bc
Total lading-target CPUs: 4


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| blackhole_from_apache_common_http | ingress throughput | +0.21 | [+0.16, +0.27] | 100.00% |
| apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | -0.11 | [-0.13, -0.09] | 100.00% |

src/observer.rs (outdated review thread, resolved)
blt added 2 commits July 5, 2023 21:23
Signed-off-by: Brian L. Troutwine <[email protected]>
Signed-off-by: Brian L. Troutwine <[email protected]>
github-actions bot commented Jul 5, 2023

Regression Detector Results

Run ID: a099ac78-7638-40c1-9f86-18b139ec50e4
Baseline: 155fcd5
Comparison: ac005df
Total lading-target CPUs: 4


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| blackhole_from_apache_common_http | ingress throughput | +0.20 | [+0.15, +0.25] | 100.00% |
| apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | -0.34 | [-0.36, -0.32] | 100.00% |

github-actions bot commented Jul 5, 2023

Regression Detector Results

Run ID: c6f15f7f-ff55-4377-b9b1-f8057975b93d
Baseline: 155fcd5
Comparison: 67af413
Total lading-target CPUs: 4


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| blackhole_from_apache_common_http | ingress throughput | +0.12 | [+0.07, +0.17] | 99.69% |
| apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | -0.16 | [-0.18, -0.14] | 100.00% |

blt requested a review from goxberry July 6, 2023 00:25
Signed-off-by: Brian L. Troutwine <[email protected]>
github-actions bot commented Jul 6, 2023

Regression Detector Results

Run ID: b478c7f5-4935-4bd5-a937-232e474eef65
Baseline: 155fcd5
Comparison: 9a28083
Total lading-target CPUs: 4


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| blackhole_from_apache_common_http | ingress throughput | -0.09 | [-0.15, -0.04] | 98.43% |
| apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | -0.10 | [-0.12, -0.07] | 100.00% |

github-actions bot commented Jul 6, 2023

Regression Detector Results

Run ID: f3826227-606b-4041-bbd1-42eccb33229d
Baseline: 155fcd5
Comparison: ee286f3
Total lading-target CPUs: 4


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| blackhole_from_apache_common_http | ingress throughput | -0.08 | [-0.13, -0.03] | 94.84% |
| apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | -0.27 | [-0.29, -0.25] | 100.00% |

Signed-off-by: Brian L. Troutwine <[email protected]>
github-actions bot commented Jul 6, 2023

Regression Detector Results

Run ID: 88f9d812-e19d-4a37-b7dd-9eeb23014a7c
Baseline: 155fcd5
Comparison: 914a20d
Total lading-target CPUs: 4


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | +0.01 | [-0.01, +0.03] | 42.97% |
| blackhole_from_apache_common_http | ingress throughput | +0.00 | [-0.05, +0.05] | 5.68% |

blt added a commit that referenced this pull request Jul 6, 2023
We have found in practice that many of our users set their
throttle to 'stable'. While the predictive throttle is valuable and we
wish to retain it, having it as the default is often confusing.

This change is technically breaking. It should be merged behind
PR #616 to appear in the changelog.

Signed-off-by: Brian L. Troutwine <[email protected]>
blt merged commit c06adaf into main Jul 6, 2023
26 checks passed
blt deleted the observer_calculate_utilization branch July 6, 2023 18:34
blt added a commit that referenced this pull request Jul 6, 2023
* Adjust the default throttle to 'stable'

We have found in practice that many of our users set their
throttle to 'stable'. While the predictive throttle is valuable and we
wish to retain it, having it as the default is often confusing.

This change is technically breaking. It should be merged behind
PR #616 to appear in the changelog.

Signed-off-by: Brian L. Troutwine <[email protected]>

* Update changelog, propose 0.17.0

Signed-off-by: Brian L. Troutwine <[email protected]>

---------

Signed-off-by: Brian L. Troutwine <[email protected]>