
request.WaiterAcceptor.Matcher does not work with JMESPath's comparison operators #2478

Closed
minamijoyo opened this issue Feb 28, 2019 · 7 comments
Labels
closed-for-staleness, third-party

Comments

@minamijoyo
Contributor

Version of AWS SDK for Go?

v1.17.6

Version of Go (go version)?

go version go1.11.2 darwin/amd64

What issue did you see?

I'm writing a custom WaitUntilXXX function for my operations tool and found that request.WaiterAcceptor.Matcher does not work with JMESPath's comparison operators (such as ==):
http://jmespath.org/specification.html#comparison-operators

This issue occurs when one operand is an attribute of an AWS API response and the other is a literal value.

That is, we cannot use a filter expression like this:

AutoScalingGroups[].[length(Instances[?LifecycleState=='InService'])][]

I wrote a unit test to clarify what I expect; see the code attached at the end.

As you can see, the (*WaiterAcceptor) match function uses awsutil.ValuesAtPath:

vals, _ = awsutil.ValuesAtPath(req.Data, a.Argument)

This in turn depends on the go-jmespath library, which claims to be fully compliant with the JMESPath specification:

result, err := jmespath.Search(path, i)

Therefore, request.WaiterAcceptor.Matcher appears to support JMESPath expressions, but it does not work as expected, which I find counterintuitive.
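As a sanity check, the same kind of filter does work as expected on plain decoded JSON (maps and slices, with no pointers), so go-jmespath itself handles the expression. A minimal sketch (my own snippet, not part of the SDK or the waiter code):

package main

import (
    "fmt"

    "github.com/jmespath/go-jmespath"
)

func main() {
    // Plain decoded-JSON style data: string values, no pointer fields.
    data := map[string]interface{}{
        "Instances": []interface{}{
            map[string]interface{}{"LifecycleState": "InService"},
            map[string]interface{}{"LifecycleState": "Pending"},
        },
    }
    out, err := jmespath.Search("length(Instances[?LifecycleState=='InService'])", data)
    fmt.Println(out, err) // 1 <nil>
}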

After debugging for a while, I think I found the cause: because most AWS API response attributes in aws-sdk-go are pointer references, comparing a pointer to a string with a string literal can never be equal.
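Here is a minimal sketch of what I suspect is happening (my assumption about go-jmespath's behavior, not the SDK's code). A *string never compares equal to a string literal, so a filter on a pointer field matches nothing:

package main

import (
    "fmt"
    "reflect"

    "github.com/jmespath/go-jmespath"
)

// group mimics a generated SDK type: the field is a *string, not a string.
type group struct {
    LifecycleState *string
}

func main() {
    s := "InService"

    // A *string is never "equal" to a string literal.
    fmt.Println(reflect.DeepEqual(&s, "InService")) // false
    fmt.Println(reflect.DeepEqual(s, "InService"))  // true

    // Assumption: go-jmespath returns the pointer field without
    // dereferencing it, so the comparison fails and nothing matches.
    out, _ := jmespath.Search("[?LifecycleState=='InService']", []group{{LifecycleState: &s}})
    fmt.Println(out) // empty result (no match)
}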

If my understanding is correct, (*AutoScaling) WaitUntilGroupInServiceWithContext is probably affected by this issue and does not work correctly:

Matcher: request.PathWaiterMatch, Argument: "contains(AutoScalingGroups[].[length(Instances[?LifecycleState=='InService']) >= MinSize][], `false`)",

Interestingly, (*ECS) WaitUntilServicesStableWithContext does not seem to have this problem:

Matcher: request.PathWaiterMatch, Argument: "length(services[?!(length(deployments) == `1` && runningCount == desiredCount)]) == `0`",

I searched for related issues and found the following:

As noted in a comment on #457 (jmespath/go-jmespath#15 (comment)), this problem was fixed in jmespath/go-jmespath#15 but then reverted in jmespath/go-jmespath#20.
It is also said that #457 was fixed by jmespath/go-jmespath#18, but even so, jmespath/go-jmespath#15 does not appear to have been resolved.

However, these issues were closed three years ago and I'm not sure what the current state is.

Steps to reproduce

Save this code as main_test.go and run go test -v:

package main

import (
    "testing"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/awsutil"
    "github.com/aws/aws-sdk-go/service/autoscaling"
)

func TestWaitUntilGroupInService(t *testing.T) {
    mockResponse := &autoscaling.DescribeAutoScalingGroupsOutput{
        AutoScalingGroups: []*autoscaling.Group{
            &autoscaling.Group{
                Instances: []*autoscaling.Instance{
                    &autoscaling.Instance{
                        LifecycleState: aws.String("InService"),
                    },
                    &autoscaling.Instance{
                        LifecycleState: aws.String("InService"),
                    },
                },
                MaxSize:         aws.Int64(4),
                MinSize:         aws.Int64(2),
                DesiredCapacity: aws.Int64(2),
            },
            &autoscaling.Group{
                Instances: []*autoscaling.Instance{
                    &autoscaling.Instance{
                        LifecycleState: aws.String("InService"),
                    },
                    &autoscaling.Instance{
                        LifecycleState: aws.String("Pending"),
                    },
                },
                MaxSize:         aws.Int64(4),
                MinSize:         aws.Int64(2),
                DesiredCapacity: aws.Int64(2),
            },
        },
    }

    var testCases = []struct {
        expect []interface{}
        data   interface{}
        path   string
    }{
        {[]interface{}{true}, mockResponse, "contains(AutoScalingGroups[].[length(Instances[?LifecycleState=='InService']) >= MinSize][], `false`)"},
        {[]interface{}{true, false}, mockResponse, "AutoScalingGroups[].[length(Instances[?LifecycleState=='InService']) >= MinSize][]"},
        {[]interface{}{2, 1}, mockResponse, "AutoScalingGroups[].[length(Instances[?LifecycleState=='InService'])][]"},
    }
    for i, c := range testCases {
        v, err := awsutil.ValuesAtPath(c.data, c.path)
        if err != nil {
            t.Errorf("case %v, expected no error, %v", i, c.path)
        }
        if e, a := c.expect, v; !awsutil.DeepEqual(e, a) {

            t.Errorf("case %v, %v, expected: %#v, but got: %#v", i, c.path, e, a)
        }
    }
}
$ go get -u github.com/aws/aws-sdk-go

$ go test -v
=== RUN   TestWaitUntilGroupInService
--- FAIL: TestWaitUntilGroupInService (0.00s)
    main_test.go:59: case 0, contains(AutoScalingGroups[].[length(Instances[?LifecycleState=='InService']) >= MinSize][], `false`), expected: []interface {}{true}, but got: []interface {}{false}
    main_test.go:59: case 1, AutoScalingGroups[].[length(Instances[?LifecycleState=='InService']) >= MinSize][], expected: []interface {}{true, false}, but got: []interface {}{}
    main_test.go:59: case 2, AutoScalingGroups[].[length(Instances[?LifecycleState=='InService'])][], expected: []interface {}{2, 1}, but got: []interface {}{0, 0}
FAIL
exit status 1
FAIL    _/Users/masayuki-morita/work/tmp/20190228       0.019s

Thanks!

@diehlaws diehlaws self-assigned this Mar 1, 2019
@diehlaws diehlaws added the investigating label Mar 1, 2019
@diehlaws
Contributor

diehlaws commented Mar 1, 2019

Hi @minamijoyo, thanks for reaching out to us about this and for the extensive level of detail provided regarding this behavior. I'm looking into this and will update again once I have more information. Please feel free to reply with any additional information you come across on this in the meantime.

@diehlaws diehlaws added third-party, 3rd-party and removed third-party labels Mar 8, 2019
@diehlaws diehlaws removed the investigating label Mar 8, 2019
@diehlaws
Contributor

diehlaws commented Mar 8, 2019

Thank you for your patience in this matter @minamijoyo. As you've mentioned, this behavior stems from go-jmespath attempting to evaluate pointer values against literal values; I've opened jmespath/go-jmespath#40 to address this.

@minamijoyo
Contributor Author

minamijoyo commented Mar 12, 2019

@diehlaws Thank you for working on this and for opening the upstream issue.
I'm not familiar with go-jmespath or reflect-based metaprogramming, and I'm not sure why the past attempt was reverted. So I will wait for a while, using WaitUntilUpstreamMaintainersActive() 😄

@tiffanyfay

tiffanyfay commented Jul 30, 2019

@diehlaws I see jmespath/go-jmespath#30, which seemed related, but it appears to be a different issue since this still isn't fixed. Do you know?

@diehlaws diehlaws added third-party and removed 3rd-party labels Feb 25, 2020
@diehlaws diehlaws removed their assignment Aug 26, 2020
@github-actions

We have noticed this issue has not received attention in 1 year. We will close this issue for now. If you think this is in error, please feel free to comment and reopen the issue.

@github-actions github-actions bot added the closing-soon label Mar 20, 2022
@isobit

isobit commented Mar 20, 2022

I recently ran into this issue while trying to modify the acceptors for ecs.WaitUntilServiceStable; I ended up just writing a custom implementation because there was no way to get this matcher to work:

Matcher: request.PathWaiterMatch, Argument: "length(services[?!(runningCount >= desiredCount && length(deployments[?(status == 'PRIMARY' && runningCount >= desiredCount)]) == `1`)]) == `0`",
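For reference, a hand-rolled check roughly equivalent to that matcher argument might look like the sketch below (hypothetical code, not the actual custom implementation; names like servicesStable are made up). Dereferencing the pointer fields with the aws value helpers makes the comparisons behave as expected:

package waiter

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/ecs"
)

// servicesStable reports whether every service has reached its desired count
// and has exactly one PRIMARY deployment that has reached its desired count.
func servicesStable(out *ecs.DescribeServicesOutput) bool {
    for _, svc := range out.Services {
        if aws.Int64Value(svc.RunningCount) < aws.Int64Value(svc.DesiredCount) {
            return false
        }
        primary := 0
        for _, d := range svc.Deployments {
            if aws.StringValue(d.Status) == "PRIMARY" &&
                aws.Int64Value(d.RunningCount) >= aws.Int64Value(d.DesiredCount) {
                primary++
            }
        }
        if primary != 1 {
            return false
        }
    }
    return true
}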

@github-actions github-actions bot removed the closing-soon label Mar 21, 2022
@github-actions

We have noticed this issue has not received attention in 1 year. We will close this issue for now. If you think this is in error, please feel free to comment and reopen the issue.

@github-actions github-actions bot added closing-soon, closed-for-staleness and removed closing-soon labels Mar 21, 2023