This enricher adds Kubernetes data to the output of goflow2, based by default on the source and destination addresses of each record. It then exports the enriched data to Loki.
Check the overall documentation in the netobserv/documents repository.
A `ConfigMap` can be set up and passed as a command-line argument via `-config /path/to/config`; see the YAML example.
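As an illustration of the wiring only (the ConfigMap name, key, and mount path below are placeholders, and the actual configurable fields are the ones documented in the YAML example), the config file can be provided through a mounted ConfigMap and referenced via `-config`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-enricher-config        # placeholder name
data:
  config.yaml: |
    # configurable fields go here; see the repository's YAML example for the real keys
---
# In the pod spec: mount the ConfigMap and pass its path to the enricher, e.g.
#   command: ["/kube-enricher", "-config", "/etc/kube-enricher/config.yaml"]
#   volumeMounts:
#     - name: config
#       mountPath: /etc/kube-enricher
#   volumes:
#     - name: config
#       configMap:
#         name: kube-enricher-config
```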
Configurable fields are:
The fields mapping can be overridden for more general-purpose use with the `-mapping` option. The default is `SrcAddr=Src,DstAddr=Dst`. Keys refer to the fields to look for in goflow2 output, and values are the prefixes to use in the created fields. For instance, the `NextHop` field could be processed the same way with `-mapping "SrcAddr=Src,DstAddr=Dst,NextHop=Nxt"`.
Generated fields are (with `[Prefix]` being `Src` or `Dst` by default):

- `[Prefix]Pod`: pod name
- `[Prefix]Namespace`: pod namespace
- `[Prefix]HostIP`: pod's host IP
- `[Prefix]Workload`: pod's workload, i.e. its controller/owner
- `[Prefix]WorkloadKind`: workload kind (Deployment, DaemonSet, etc.)
- `[Prefix]Warn`: any warning message that could have been triggered while processing kube info
```bash
make all # = make fmt build lint test
```
(This image will contain both goflow2 and the plugin.)

```bash
# build an image with version "dev":
make image
# build and push a test version:
IMAGE=quay.io/myuser/goflow2-kube VERSION=test make image push
```
To run it, simply pipe the goflow2 output to `kube-enricher`.
If RBAC is enabled, `kube-enricher` needs a few cluster-wide permissions:
- LIST on Pods and Services
- GET on ReplicaSets
Check goflow-kube.yaml for an example.
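The permissions above can be sketched as a standard Kubernetes ClusterRole (the role name here is illustrative; goflow-kube.yaml in the repository is the authoritative version):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: goflow-kube   # illustrative name; see goflow-kube.yaml
rules:
  # LIST on Pods and Services (core API group)
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["list"]
  # GET on ReplicaSets (apps API group)
  - apiGroups: ["apps"]
    resources: ["replicasets"]
    verbs: ["get"]
```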
Assuming the built image is `quay.io/netobserv/goflow2-kube:dev`.
Since both goflow2 and the enricher are contained in a single image, you can declare the following command in the pod container:
```yaml
# ...
  containers:
    - command:
        - /bin/sh
        - -c
        - /goflow2 -loglevel "trace" | /kube-enricher -loglevel "trace"
      image: quay.io/netobserv/goflow2-kube:dev
# ...
```
Check the examples directory.
Example of output:
```json
{"BiFlowDirection":0,"Bytes":20800,"DstAS":0,"DstAddr":"10.96.0.1","DstMac":"0a:58:0a:f4:00:01","DstNet":0,"DstPort":443,"DstVlan":0,"EgressVrfID":0,"Etype":2048,"EtypeName":"IPv4","ForwardingStatus":0,"FragmentId":0,"FragmentOffset":0,"IPTTL":0,"IPTos":0,"IPv6FlowLabel":0,"IcmpCode":0,"IcmpName":"","IcmpType":0,"InIf":12,"IngressVrfID":0,"NextHop":"","NextHopAS":0,"OutIf":0,"Packets":400,"Proto":6,"ProtoName":"TCP","SamplerAddress":"10.244.0.2","SamplingRate":0,"SequenceNum":577,"SrcAS":0,"SrcAddr":"10.244.0.5","SrcHostIP":"10.89.0.2","SrcMac":"0a:58:0a:f4:00:05","SrcNamespace":"local-path-storage","SrcNet":0,"SrcPod":"local-path-provisioner-78776bfc44-p2xkl","SrcPort":56144,"SrcVlan":0,"SrcWorkload":"local-path-provisioner","SrcWorkloadKind":"Deployment","TCPFlags":0,"TimeFlowEnd":0,"TimeFlowStart":0,"TimeReceived":1628419398,"Type":"IPFIX","VlanId":0}
```
Notice `"SrcPod":"local-path-provisioner-78776bfc44-p2xkl"`, `"SrcWorkload":"local-path-provisioner"`, `"SrcNamespace":"local-path-storage"`, etc.
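A quick way to confirm that enrichment ran is to extract the generated fields from a record with plain `grep` (the record below is the sample above, shortened to the relevant fields):

```shell
# Extract the generated Src* fields from an enriched flow record
echo '{"SrcAddr":"10.244.0.5","SrcNamespace":"local-path-storage","SrcPod":"local-path-provisioner-78776bfc44-p2xkl","SrcWorkload":"local-path-provisioner","SrcWorkloadKind":"Deployment"}' \
  | grep -oE '"Src(Pod|Workload|Namespace)":"[^"]*"'
```

If the fields are absent (or a `SrcWarn` field is present), the enricher could not resolve the address to a pod.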
First, refer to this documentation to set up ovn-k on Kind. Then:
```bash
kubectl apply -f ./examples/goflow-kube.yaml
GF_IP=`kubectl get svc goflow-kube -ojsonpath='{.spec.clusterIP}'` && echo $GF_IP
kubectl set env daemonset/ovnkube-node -c ovnkube-node -n ovn-kubernetes OVN_IPFIX_TARGETS="$GF_IP:2055"
```
or simply:

```bash
make ovnk-deploy
```
Similarly:

```bash
kubectl apply -f ./examples/goflow-kube-legacy.yaml
GF_IP=`kubectl get svc goflow-kube-legacy -ojsonpath='{.spec.clusterIP}'` && echo $GF_IP
kubectl set env daemonset/ovnkube-node -c ovnkube-node -n ovn-kubernetes OVN_NETFLOW_TARGETS="$GF_IP:2056"
```
- Prerequisite: make sure you have a running OpenShift cluster (at least 4.8) with `OVNKubernetes` set as the network provider.
In OpenShift, a difference from the upstream `ovn-kubernetes` is that the flow export configuration is managed by the `ClusterNetworkOperator`.
```bash
oc apply -f ./examples/goflow-kube.yaml
GF_IP=`oc get svc goflow-kube -ojsonpath='{.spec.clusterIP}'` && echo $GF_IP
oc patch networks.operator.openshift.io cluster --type='json' -p "$(sed -e "s/GF_IP/$GF_IP/" examples/net-cluster-patch.json)"
```
or simply:

```bash
make cno-deploy
```
You can use the `app=goflow-kube` label to retrieve any deployed component.
Show all components:

```bash
kubectl get all -l app=goflow-kube
```
Get pod logs:

```bash
kubectl logs -l app=goflow-kube
```