
Statefulset is looping for "size -1" number of times #82

Open

vigneshp826 opened this issue Jul 3, 2024 · 2 comments

vigneshp826 commented Jul 3, 2024

Type of question

General operator-related help

Question

What did you do?

I have a playbook that creates a StatefulSet for every CR created against my operator (for simplicity, all other Kubernetes resource creation has been removed). On CR creation, the playbook is executed repeatedly, in line with the replica size. For example, when the replica size is 3, the playbook executes 2 times (size - 1).

Setting "watchDependentResources: False", stops this pattern and executes only one time but it is not a better solution in my use case.

What did you expect to see?

Plays should run only once, to avoid wasting resources before the reconciliation period kicks in.

What did you see instead? Under which circumstances?

Multiple executions of the Kubernetes resource creation, triggered by the status output of each loop.

Environment

Operator type:

/language ansible

$ operator-sdk version
operator-sdk version: "v1.31.0", commit: "e67da35ef4fff3e471a208904b2a142b27ae32b1", kubernetes version: "1.26.0", go version: "go1.19.11", GOOS: "linux", GOARCH: "amd64"

$ kubectl version

Client Version: v1.28.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.25.4

Additional context

Playbook

- hosts: localhost
  gather_facts: false
  tags: always
  connection: local

  tasks:

    - name: Create StatefulSet
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: apps/v1
          kind: StatefulSet
          metadata:
            name: my-statefulset
            namespace: nginx
            annotations:
              "ansible.sdk.operatorframework.io/verbosity": "5"
          spec:
            serviceName: "nginx"
            # replica count comes from the CR spec (size)
            replicas: "{{ size | int }}"
            selector:
              matchLabels:
                app: nginx
            template:
              metadata:
                labels:
                  app: nginx
              spec:
                containers:
                - name: nginx
                  image: nginx:1.14.2
                  ports:
                  - containerPort: 80
                    name: web
            volumeClaimTemplates:
            - metadata:
                name: my-storage
              spec:
                accessModes: [ "ReadWriteOnce" ]
                resources:
                  requests:
                    storage: 1Gi
                volumeMode: Filesystem
                storageClassName: ebs-sc
      register: create_statefulset
      # "until" is already evaluated as a Jinja expression, so variables are
      # referenced without extra {{ }} delimiters
      until: create_statefulset.result.status.replicas is defined and create_statefulset.result.status.replicas == (size | int)
      retries: 30
      delay: 10
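
A minimal sketch of an alternative that keeps the create task one-shot and moves the wait into a read-only poll with kubernetes.core.k8s_info (the statefulset_definition variable is a hypothetical stand-in for the manifest above); this only changes the Ansible-side looping and does not by itself stop reconciles triggered by dependent-resource status updates:

    - name: Create StatefulSet (one-shot, no until loop on the create)
      kubernetes.core.k8s:
        state: present
        definition: "{{ statefulset_definition }}"  # hypothetical variable holding the manifest above

    - name: Wait for the StatefulSet to reach the desired replica count
      kubernetes.core.k8s_info:
        api_version: apps/v1
        kind: StatefulSet
        name: my-statefulset
        namespace: nginx
      register: sts_info
      until: >-
        sts_info.resources | length > 0 and
        (sts_info.resources[0].status.readyReplicas | default(0)) == (size | int)
      retries: 30
      delay: 10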

openshift-ci bot commented Jul 3, 2024

@vigneshp826: The label(s) language/ansible cannot be applied, because the repository doesn't have them.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

vigneshp826 (Author) commented

@everettraven sorry for tagging you, but I would appreciate anyone's help here so I can move forward with my production rollout.
