
Add support for Intel E823C NICs #520

Merged
SchSeba merged 3 commits into k8snetworkplumbingwg:master on Oct 20, 2023

Conversation

murali509
Contributor

We use Kontron ME1310 Edge Servers, which have built-in Intel E823C NICs, and we create VFs on these NICs. We are able to create a SriovNetworkNodePolicy and create the VFs after adding the NIC IDs to the supported-nic-ids config map.

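For reference, the config-map entry for this NIC looks roughly like the following. This is a minimal sketch: the map key and the operator namespace are illustrative, while the vendor/PF/VF device IDs (8086 / 188a / 1889) match the ice PFs and iavf VFs reported in the node state pasted later in this conversation.

apiVersion: v1
kind: ConfigMap
metadata:
  name: supported-nic-ids
  namespace: sriov-network-operator   # assumed operator namespace
data:
  # value format: "<vendor ID> <PF device ID> <VF device ID>"
  Intel_ice_E823C: "8086 188a 1889"   # key name is illustrative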
@github-actions

Thanks for your PR,
To run vendors CIs use one of:

  • /test-all: To run all tests for all vendors.
  • /test-e2e-all: To run all E2E tests for all vendors.
  • /test-e2e-nvidia-all: To run all E2E tests for NVIDIA vendor.

To skip the vendors CIs use one of:

  • /skip-all: To skip all tests for all vendors.
  • /skip-e2e-all: To skip all E2E tests for all vendors.
  • /skip-e2e-nvidia-all: To skip all E2E tests for NVIDIA vendor.
Best regards.

@SchSeba
Collaborator

SchSeba commented Oct 17, 2023

@Eoghan1232 does intel support this nic in the sriov operator?

@Eoghan1232
Collaborator

> @Eoghan1232 does intel support this nic in the sriov operator?

let me check, I've never used this specific nic before, but I can see it being used in DPDK and other docs.

if @murali509 confirms the functionality and fixes the PR, I see no reason to object.


@murali509
Contributor Author

Hi @Eoghan1232, we have tested SR-IOV functionality on this Intel E823-C NIC in Kontron ME1310 servers and it's working fine.
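
For context, the policies used on this node are of roughly this shape. The names, namespace, numVfs, MTU, and device type are taken from the node state pasted below; the nicSelector and nodeSelector details are assumptions for illustration.

apiVersion: sriovnetwork.k8s.cni.cncf.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-network-node-policy-eno1
  namespace: gke-operators
spec:
  resourceName: eno1
  numVfs: 2
  mtu: 1500
  deviceType: netdevice
  nicSelector:
    pfNames: ["eno1"]            # assumed selector; deviceID/vendor or rootDevices could be used instead
  nodeSelector:
    kubernetes.io/hostname: dauk-mrl-k-gdcv-host02.denseair.net   # assumed node selector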

@murali509
Contributor Author

(screenshots attached: image-20231013-081940, image-20231013-082144, image-20231013-082731)

@murali509
Contributor Author

(screenshot attached)

@murali509
Contributor Author

kubectl -n gke-operators get SriovNetworkNodeState dauk-mrl-k-gdcv-host02.denseair.net -o yaml

apiVersion: sriovnetwork.k8s.cni.cncf.io/v1
kind: SriovNetworkNodeState
metadata:
  creationTimestamp: "2023-10-13T16:10:13Z"
  generation: 4
  name: dauk-mrl-k-gdcv-host02.denseair.net
  namespace: gke-operators
  ownerReferences:
  - apiVersion: sriovnetwork.k8s.cni.cncf.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: SriovNetworkNodePolicy
    name: default
    uid: b52c7683-e98b-4020-8fd8-b86037bdabb9
  resourceVersion: "2784256"
  uid: c4cfd65d-70f3-4578-914c-f2d43380fccb
spec:
  dpConfigVersion: "2784172"
  interfaces:
  - mtu: 1500
    name: eno1
    numVfs: 2
    pciAddress: 0000:89:00.3
    vfGroups:
    - deviceType: netdevice
      mtu: 1500
      policyName: sriov-network-node-policy-eno1
      resourceName: eno1
      vfRange: 0-1
  - mtu: 1500
    name: enp145s0f0
    numVfs: 2
    pciAddress: 0000:91:00.0
    vfGroups:
    - deviceType: netdevice
      mtu: 1500
      policyName: sriov-network-node-policy-enp145s0f0
      resourceName: enp145s0f0
      vfRange: 0-1
  - mtu: 1500
    name: enp145s0f1
    numVfs: 2
    pciAddress: 0000:91:00.1
    vfGroups:
    - deviceType: netdevice
      mtu: 1500
      policyName: sriov-network-node-policy-enp145s0f1
      resourceName: enp145s0f1
      vfRange: 0-1
  - mtu: 1500
    name: enp22s0f0
    numVfs: 2
    pciAddress: "0000:16:00.0"
    vfGroups:
    - deviceType: netdevice
      mtu: 1500
      policyName: sriov-network-node-policy-enp22s0f0
      resourceName: enp22s0f0
      vfRange: 0-1
  - mtu: 1500
    name: enp22s0f1
    numVfs: 2
    pciAddress: "0000:16:00.1"
    vfGroups:
    - deviceType: netdevice
      mtu: 1500
      policyName: sriov-network-node-policy-enp22s0f1
      resourceName: enp22s0f1
      vfRange: 0-1
status:
  interfaces:
  - deviceID: "1533"
    driver: igb
    linkSpeed: 1000 Mb/s
    linkType: ETH
    mac: 00:a0:a5:e3:e3:8d
    mtu: 1500
    name: eno5
    pciAddress: "0000:05:00.0"
    vendor: "8086"
  - Vfs:
    - deviceID: "1889"
      driver: iavf
      mac: fa:49:be:21:5e:2f
      mtu: 1500
      name: enp22s0f0v0
      pciAddress: "0000:16:01.0"
      vendor: "8086"
      vfID: 0
    - deviceID: "1889"
      driver: iavf
      mac: 3e:8f:a4:ad:75:24
      mtu: 1500
      name: enp22s0f0v1
      pciAddress: "0000:16:01.1"
      vendor: "8086"
      vfID: 1
    deviceID: 159b
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: b4:83:51:06:b1:a8
    mtu: 1500
    name: enp22s0f0
    numVfs: 2
    pciAddress: "0000:16:00.0"
    totalvfs: 128
    vendor: "8086"
  - Vfs:
    - deviceID: "1889"
      driver: iavf
      mac: 12:9e:9d:b6:ff:c4
      mtu: 1500
      name: enp22s0f1v0
      pciAddress: "0000:16:11.0"
      vendor: "8086"
      vfID: 0
    - deviceID: "1889"
      driver: iavf
      mac: 82:31:82:f2:b8:bd
      mtu: 1500
      name: enp22s0f1v1
      pciAddress: "0000:16:11.1"
      vendor: "8086"
      vfID: 1
    deviceID: 159b
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: b4:83:51:06:b1:a9
    mtu: 1500
    name: enp22s0f1
    numVfs: 2
    pciAddress: "0000:16:00.1"
    totalvfs: 128
    vendor: "8086"
  - deviceID: 188a
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: 00:a0:a5:e3:e3:8c
    mtu: 1500
    name: eno4
    pciAddress: 0000:89:00.0
    totalvfs: 64
    vendor: "8086"
  - deviceID: 188a
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: 00:a0:a5:e3:e3:8b
    mtu: 1500
    name: eno3
    pciAddress: 0000:89:00.1
    totalvfs: 64
    vendor: "8086"
  - deviceID: 188a
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: 00:a0:a5:e3:e3:8a
    mtu: 1500
    name: eno2
    pciAddress: 0000:89:00.2
    totalvfs: 64
    vendor: "8086"
  - Vfs:
    - deviceID: "1889"
      driver: iavf
      mac: 96:91:da:da:4f:76
      mtu: 1500
      name: eno1v0
      pciAddress: 0000:89:19.0
      vendor: "8086"
      vfID: 0
    - deviceID: "1889"
      driver: iavf
      mac: b2:0f:94:f0:82:99
      mtu: 1500
      name: eno1v1
      pciAddress: 0000:89:19.1
      vendor: "8086"
      vfID: 1
    deviceID: 188a
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: 00:a0:a5:e3:e3:89
    mtu: 1500
    name: eno1
    numVfs: 2
    pciAddress: 0000:89:00.3
    totalvfs: 64
    vendor: "8086"
  - Vfs:
    - deviceID: "1889"
      driver: iavf
      mac: ae:ea:e8:f7:19:dd
      mtu: 1500
      name: enp145s0f0v0
      pciAddress: 0000:91:01.0
      vendor: "8086"
      vfID: 0
    - deviceID: "1889"
      driver: iavf
      mac: 36:42:6d:a5:f5:ee
      mtu: 1500
      name: enp145s0f0v1
      pciAddress: 0000:91:01.1
      vendor: "8086"
      vfID: 1
    deviceID: 159b
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: b4:83:51:06:b2:18
    mtu: 1500
    name: enp145s0f0
    numVfs: 2
    pciAddress: 0000:91:00.0
    totalvfs: 128
    vendor: "8086"
  - Vfs:
    - deviceID: "1889"
      driver: iavf
      mac: 4a:fe:56:ef:68:f4
      mtu: 1500
      name: enp145s0f1v0
      pciAddress: 0000:91:11.0
      vendor: "8086"
      vfID: 0
    - deviceID: "1889"
      driver: iavf
      mac: d2:85:3e:db:c6:1c
      mtu: 1500
      name: enp145s0f1v1
      pciAddress: 0000:91:11.1
      vendor: "8086"
      vfID: 1
    deviceID: 159b
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: b4:83:51:06:b2:19
    mtu: 1500
    name: enp145s0f1
    numVfs: 2
    pciAddress: 0000:91:00.1
    totalvfs: 128
    vendor: "8086"
  syncStatus: Succeeded
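
As a quick sanity check, one can also confirm that the device plugin advertises the new VF resources on the node. A sketch, assuming the operator's default resource prefix (the prefix and exact resource names depend on the operator configuration):

kubectl get node dauk-mrl-k-gdcv-host02.denseair.net \
  -o jsonpath='{.status.allocatable}'
# expected to contain entries such as "openshift.io/eno1": "2" (prefix assumed)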

@coveralls

Pull Request Test Coverage Report for Build 6549752628

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • 16 unchanged lines in 4 files lost coverage.
  • Overall coverage decreased (-0.1%) to 25.187%

Files with Coverage Reduction                   New Missed Lines   %
pkg/apply/apply.go                              2                  74.29%
api/v1/helper.go                                3                  42.04%
controllers/sriovnetwork_controller.go          4                  70.68%
controllers/sriovoperatorconfig_controller.go   7                  53.44%
Totals Coverage Status
Change from base Build 6530802222: -0.1%
Covered Lines: 2258
Relevant Lines: 8965

💛 - Coveralls

@murali509
Contributor Author

@SchSeba @Eoghan1232, I don't know why some checks failed in OCP; I don't think this code change is related to the failing OCP test. Could you let me know what should be done next?

@Eoghan1232
Collaborator

Eoghan1232 left a comment


I'm okay with these changes. @murali509, can you confirm that you tested binding the VFs to a DPDK driver too? I see you marked it as capable in the PR.

@SchSeba

@murali509
Contributor Author

> binding the VFs to a DPDK driver

Yes, I have tested binding the VFs to a DPDK driver, as you can see in the output below for the eno2 NIC (E823C) with the vfio-pci device type:

kubectl -n gke-operators get SriovNetworkNodeState dauk-mrl-k-gdcv-host02.denseair.net -o yaml

apiVersion: sriovnetwork.k8s.cni.cncf.io/v1
kind: SriovNetworkNodeState
metadata:
  creationTimestamp: "2023-10-13T16:10:13Z"
  generation: 5
  name: dauk-mrl-k-gdcv-host02.denseair.net
  namespace: gke-operators
  ownerReferences:
  - apiVersion: sriovnetwork.k8s.cni.cncf.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: SriovNetworkNodePolicy
    name: default
    uid: b52c7683-e98b-4020-8fd8-b86037bdabb9
  resourceVersion: "4059558"
  uid: c4cfd65d-70f3-4578-914c-f2d43380fccb
spec:
  dpConfigVersion: "4059478"
  interfaces:
  - mtu: 1500
    name: eno2
    numVfs: 6
    pciAddress: 0000:89:00.2
    vfGroups:
    - deviceType: vfio-pci
      mtu: 1500
      policyName: sriov-network-node-policy-dpdk-ngu
      resourceName: intel_sriov_dpdk_ngu
      vfRange: 0-1
  - mtu: 1500
    name: eno1
    numVfs: 2
    pciAddress: 0000:89:00.3
    vfGroups:
    - deviceType: netdevice
      mtu: 1500
      policyName: sriov-network-node-policy-eno1
      resourceName: eno1
      vfRange: 0-1
  - mtu: 1500
    name: enp145s0f0
    numVfs: 2
    pciAddress: 0000:91:00.0
    vfGroups:
    - deviceType: netdevice
      mtu: 1500
      policyName: sriov-network-node-policy-enp145s0f0
      resourceName: enp145s0f0
      vfRange: 0-1
  - mtu: 1500
    name: enp145s0f1
    numVfs: 2
    pciAddress: 0000:91:00.1
    vfGroups:
    - deviceType: netdevice
      mtu: 1500
      policyName: sriov-network-node-policy-enp145s0f1
      resourceName: enp145s0f1
      vfRange: 0-1
  - mtu: 1500
    name: enp22s0f0
    numVfs: 2
    pciAddress: "0000:16:00.0"
    vfGroups:
    - deviceType: netdevice
      mtu: 1500
      policyName: sriov-network-node-policy-enp22s0f0
      resourceName: enp22s0f0
      vfRange: 0-1
  - mtu: 1500
    name: enp22s0f1
    numVfs: 2
    pciAddress: "0000:16:00.1"
    vfGroups:
    - deviceType: netdevice
      mtu: 1500
      policyName: sriov-network-node-policy-enp22s0f1
      resourceName: enp22s0f1
      vfRange: 0-1
status:
  interfaces:
  - deviceID: "1533"
    driver: igb
    linkSpeed: 1000 Mb/s
    linkType: ETH
    mac: 00:a0:a5:e3:e3:8d
    mtu: 1500
    name: eno5
    pciAddress: "0000:05:00.0"
    vendor: "8086"
  - Vfs:
    - deviceID: "1889"
      driver: iavf
      mac: fa:49:be:21:5e:2f
      mtu: 1500
      name: enp22s0f0v0
      pciAddress: "0000:16:01.0"
      vendor: "8086"
      vfID: 0
    - deviceID: "1889"
      driver: iavf
      mac: 3e:8f:a4:ad:75:24
      mtu: 1500
      name: enp22s0f0v1
      pciAddress: "0000:16:01.1"
      vendor: "8086"
      vfID: 1
    deviceID: 159b
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: b4:83:51:06:b1:a8
    mtu: 1500
    name: enp22s0f0
    numVfs: 2
    pciAddress: "0000:16:00.0"
    totalvfs: 128
    vendor: "8086"
  - Vfs:
    - deviceID: "1889"
      driver: iavf
      mac: 12:9e:9d:b6:ff:c4
      mtu: 1500
      name: enp22s0f1v0
      pciAddress: "0000:16:11.0"
      vendor: "8086"
      vfID: 0
    - deviceID: "1889"
      driver: iavf
      mac: 82:31:82:f2:b8:bd
      mtu: 1500
      name: enp22s0f1v1
      pciAddress: "0000:16:11.1"
      vendor: "8086"
      vfID: 1
    deviceID: 159b
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: b4:83:51:06:b1:a9
    mtu: 1500
    name: enp22s0f1
    numVfs: 2
    pciAddress: "0000:16:00.1"
    totalvfs: 128
    vendor: "8086"
  - deviceID: 188a
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: 00:a0:a5:e3:e3:8c
    mtu: 1500
    name: eno4
    pciAddress: 0000:89:00.0
    totalvfs: 64
    vendor: "8086"
  - deviceID: 188a
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: 00:a0:a5:e3:e3:8b
    mtu: 1500
    name: eno3
    pciAddress: 0000:89:00.1
    totalvfs: 64
    vendor: "8086"
  - Vfs:
    - deviceID: "1889"
      driver: vfio-pci
      pciAddress: 0000:89:11.0
      vendor: "8086"
      vfID: 0
    - deviceID: "1889"
      driver: vfio-pci
      pciAddress: 0000:89:11.1
      vendor: "8086"
      vfID: 1
    - deviceID: "1889"
      driver: iavf
      mac: d6:3d:7a:42:37:d7
      mtu: 1500
      name: eno2v2
      pciAddress: 0000:89:11.2
      vendor: "8086"
      vfID: 2
    - deviceID: "1889"
      driver: iavf
      mac: 52:d8:0f:97:d6:88
      mtu: 1500
      name: eno2v3
      pciAddress: 0000:89:11.3
      vendor: "8086"
      vfID: 3
    - deviceID: "1889"
      driver: iavf
      mac: 52:b1:51:e9:8a:e5
      mtu: 1500
      name: eno2v4
      pciAddress: 0000:89:11.4
      vendor: "8086"
      vfID: 4
    - deviceID: "1889"
      driver: iavf
      mac: 52:85:8b:89:88:8c
      mtu: 1500
      name: eno2v5
      pciAddress: 0000:89:11.5
      vendor: "8086"
      vfID: 5
    deviceID: 188a
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: 00:a0:a5:e3:e3:8a
    mtu: 1500
    name: eno2
    numVfs: 6
    pciAddress: 0000:89:00.2
    totalvfs: 64
    vendor: "8086"
  - Vfs:
    - deviceID: "1889"
      driver: iavf
      mac: 96:91:da:da:4f:76
      mtu: 1500
      name: eno1v0
      pciAddress: 0000:89:19.0
      vendor: "8086"
      vfID: 0
    - deviceID: "1889"
      driver: iavf
      mac: b2:0f:94:f0:82:99
      mtu: 1500
      name: eno1v1
      pciAddress: 0000:89:19.1
      vendor: "8086"
      vfID: 1
    deviceID: 188a
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: 00:a0:a5:e3:e3:89
    mtu: 1500
    name: eno1
    numVfs: 2
    pciAddress: 0000:89:00.3
    totalvfs: 64
    vendor: "8086"
  - Vfs:
    - deviceID: "1889"
      driver: iavf
      mac: ae:ea:e8:f7:19:dd
      mtu: 1500
      name: enp145s0f0v0
      pciAddress: 0000:91:01.0
      vendor: "8086"
      vfID: 0
    - deviceID: "1889"
      driver: iavf
      mac: 36:42:6d:a5:f5:ee
      mtu: 1500
      name: enp145s0f0v1
      pciAddress: 0000:91:01.1
      vendor: "8086"
      vfID: 1
    deviceID: 159b
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: b4:83:51:06:b2:18
    mtu: 1500
    name: enp145s0f0
    numVfs: 2
    pciAddress: 0000:91:00.0
    totalvfs: 128
    vendor: "8086"
  - Vfs:
    - deviceID: "1889"
      driver: iavf
      mac: 4a:fe:56:ef:68:f4
      mtu: 1500
      name: enp145s0f1v0
      pciAddress: 0000:91:11.0
      vendor: "8086"
      vfID: 0
    - deviceID: "1889"
      driver: iavf
      mac: d2:85:3e:db:c6:1c
      mtu: 1500
      name: enp145s0f1v1
      pciAddress: 0000:91:11.1
      vendor: "8086"
      vfID: 1
    deviceID: 159b
    driver: ice
    eSwitchMode: legacy
    linkSpeed: 10000 Mb/s
    linkType: ETH
    mac: b4:83:51:06:b2:19
    mtu: 1500
    name: enp145s0f1
    numVfs: 2
    pciAddress: 0000:91:00.1
    totalvfs: 128
    vendor: "8086"
  syncStatus: Succeeded
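
For reference, the DPDK policy behind the eno2 entry above would look roughly like this. The policy name, resource name, namespace, numVfs, and device type are taken from the output; the nicSelector VF-partitioning syntax and nodeSelector are assumptions for illustration.

apiVersion: sriovnetwork.k8s.cni.cncf.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-network-node-policy-dpdk-ngu
  namespace: gke-operators
spec:
  resourceName: intel_sriov_dpdk_ngu
  numVfs: 6
  mtu: 1500
  deviceType: vfio-pci
  nicSelector:
    pfNames: ["eno2#0-1"]        # assumed: only VFs 0-1 of eno2 go into this vfio-pci group
  nodeSelector:
    kubernetes.io/hostname: dauk-mrl-k-gdcv-host02.denseair.net   # assumed node selector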

@adrianchiris
Collaborator

Please create an issue for tracking so we can close it once this is merged.
Also add your results to the issue as described in:

https://github.com/k8snetworkplumbingwg/sriov-network-operator/blob/master/doc/supported-hardware.md#initial-support
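
The device identity for such a report can be captured with something like the following; the description string printed by lspci depends on the local pci.ids database, so the sample output line is only indicative:

lspci -nn | grep -i 188a
# e.g. 89:00.2 Ethernet controller [0200]: Intel Corporation Device [8086:188a]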

@murali509
Contributor Author

Could you merge this into master if there are no further review comments?

@SchSeba
Collaborator

SchSeba commented Oct 20, 2023

Hi @murali509, just waiting for the issue so we can merge this and close the issue together. Thanks!

@murali509
Contributor Author

@SchSeba, the issue was opened previously: #501 (comment)

@SchSeba SchSeba merged commit 2bafe25 into k8snetworkplumbingwg:master Oct 20, 2023
10 of 11 checks passed