Milvus - Kafka partitioning strategy #36224
-
As far as I know, Milvus doesn't specify any partition when creating its Kafka consumer. I suppose it always uses the default/first partition.
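For anyone curious what "not specifying a partition" looks like at the client level, here is a minimal sketch (not Milvus's actual code) using the confluent-kafka Python client with a placeholder topic name, contrasting a group subscription with explicitly pinning the consumer to partition 0:

```python
from confluent_kafka import Consumer, TopicPartition

# Illustrative settings only; Milvus's real client configuration may differ.
conf = {
    "bootstrap.servers": "localhost:9092",
    "group.id": "partition-demo",
    "auto.offset.reset": "earliest",
}
topic = "my-milvus-channel"  # placeholder name, not taken from this thread

consumer = Consumer(conf)

# Option A: subscribe and let the group coordinator decide which partitions this consumer reads.
consumer.subscribe([topic])

# Option B: explicitly assign partition 0, which is effectively what a
# single-partition setup boils down to.
# consumer.assign([TopicPartition(topic, 0)])

msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print(f"partition={msg.partition()} offset={msg.offset()}")
consumer.close()
```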
-
Thank you so much @yhmo for this good information. So it is clear that even if we configure multiple partitions on the Kafka topics, it has no impact, since Milvus uses a single-partition strategy to publish messages. Does the Kafka log file size have any impact on Milvus? By default, the maximum log segment size on the Kafka brokers is 1 GB. Assuming a huge data ingestion (billions of rows / TBs in size), would increasing the Kafka log segment size to a higher value (e.g. 5 GB) help the Milvus data nodes/query nodes process data faster?
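For reference, the segment size can be inspected per topic rather than only broker-wide: the topic-level setting is `segment.bytes`, while the broker-wide default is `log.segment.bytes` (1 GiB by default). Below is a rough read-only sketch using the confluent-kafka Python admin client with a placeholder topic name; whether a larger segment actually speeds up Milvus data/query nodes is a separate question, this only shows where the knob lives.

```python
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "localhost:9092"})
topic = "my-milvus-channel"  # placeholder name; use your actual Milvus channel topic

# Read the current topic configuration and print the segment size.
resource = ConfigResource(ConfigResource.Type.TOPIC, topic)
configs = admin.describe_configs([resource])[resource].result()
print("segment.bytes =", configs["segment.bytes"].value)

# To change it, the usual route is the CLI, e.g.:
#   kafka-configs.sh --alter --topic <topic> --add-config segment.bytes=<bytes>
```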
-
Hello Team, we are using Kafka as the message stream with Milvus. When we log into the broker and watch the log directory, we always observe that only one partition is used by the log files, even though we have overridden the broker config (num.partitions = 2) to create 2 partitions. Milvus writes data to only one partition; the other partition's log file is always 0 bytes. In the snapshot below, only partition 0 is used by Milvus. We would appreciate it if you could share which partitioning strategy Milvus uses, and also how to obtain the producer and consumer configs. Do we gain any advantage by increasing the number of partitions in Kafka if the ingestion rate is very high? The default is 1.
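One way to confirm from the client side which partitions actually hold data (rather than eyeballing log directory sizes) is to query each partition's watermark offsets. A minimal sketch with the confluent-kafka Python client, assuming a placeholder topic name:

```python
from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "offset-check",  # throwaway group; we never commit offsets
})
topic = "my-milvus-channel"      # placeholder name; use your actual Milvus channel topic

# Fetch partition metadata for the topic, then the low/high watermark of each partition.
metadata = consumer.list_topics(topic, timeout=10)
for pid in sorted(metadata.topics[topic].partitions):
    low, high = consumer.get_watermark_offsets(TopicPartition(topic, pid), timeout=10)
    print(f"partition {pid}: {high - low} messages (offsets {low}..{high})")

consumer.close()
```

If only partition 0 reports a non-zero message count, that matches the 0-byte log files observed for the other partition.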