Encoding and decoding are both hardware-accelerated using GPUs; does BMF copy the GPU data back to host memory? #92

Open
lxllsl opened this issue Jan 16, 2024 · 1 comment

lxllsl commented Jan 16, 2024

Case:
ffmpeg -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -hwaccel_device 0 -c:v h264_cuvid -i input.h264 -c:v h264_nvenc output.h264

Q:
Both encoding and decoding are hardware-accelerated on the GPU: decoding completes on the GPU and encoding also completes on the GPU, with no copy from GPU memory to host memory.
In this case, how does BMF's handling of the decoding stage differ from CPU-mode decoding? Do both use an AVFrame to receive the decoded result, with the frame's data pointer living in GPU memory in one case and in host memory in the other?
Also, in this scenario, does BMF copy the GPU data back to host memory, wrap it in a Task, and place it in the scheduling queue? Wouldn't that defeat the original purpose of reducing memory copies?

taoboyang commented Jan 18, 2024

In the BMF framework, we support data flow over a GPU-only link. If your entire pipeline runs on the GPU, the data is not copied to the CPU. The demo under master/bmf/demo/gpu_module shows the BMF framework doing image processing entirely on the GPU link (a minimal transcode sketch follows the list below). In my opinion, besides supporting data flow on the GPU link, BMF has the following advantages:

  1. BMF supports convenient data transfer between the CPU and GPU.
  2. BMF provides a convenient backend interface for exchanging data with common image-processing frameworks such as bmf, torch, and opencv (a rough interop sketch is at the end of this comment).
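
For reference, here is a minimal sketch of a GPU-only transcode in BMF's Python API, mirroring the ffmpeg command above. The option names (video_params["hwaccel"] on the decoder, video_params["codec"] = "h264_nvenc" on the encoder) follow the GPU demos in the repository; treat them as an assumption and check them against your BMF version.

import bmf

graph = bmf.graph()

# Decode on the GPU (NVDEC); decoded frames stay in GPU memory on this link.
video = graph.decode({
    "input_path": "input.h264",
    "video_params": {"hwaccel": "cuda"},
})

# Encode on the GPU (NVENC); no copy back to host memory is needed on this link.
bmf.encode(
    video["video"],
    None,  # no audio stream in this sketch
    {
        "output_path": "output.h264",
        "video_params": {"codec": "h264_nvenc"},
    },
).run()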

If you have more questions or problems and the time to discuss them, we sincerely invite you to download Lark and join our user group (all members are internal R&D staff): https://applink.feishu.cn/client/chat/chatter/add_by_link?link_token=4cev1bee-4d94-42c8-972b-4ae4a12c9da1
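
On point 2 above, a rough sketch of handing a decoded frame to torch is given below. The exact conversion calls (VideoFrame.frame(), Frame.plane(i).torch(), Frame.nplanes()) are assumptions based on the backend-interface description rather than a verified API reference, so please double-check them against the BMF/hmp documentation for your version.

import torch  # the .torch() view below is assumed to yield torch.Tensor objects
from bmf import VideoFrame

def planes_to_torch(vf: VideoFrame):
    # Assumed API: VideoFrame.frame() returns the underlying hmp Frame, and each
    # plane tensor can be viewed as a torch tensor without copying, so a frame
    # decoded on the GPU stays on the GPU.
    frame = vf.frame()
    planes = [frame.plane(i).torch() for i in range(frame.nplanes())]
    assert all(isinstance(p, torch.Tensor) for p in planes)
    return planes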
