
Feature/better gpu #286

Merged
merged 6 commits into dracula:master on Sep 5, 2024

Conversation

Theoreticallyhugo
Contributor

  • Adding support for Apple M-chip GPUs (usage and power, but not VRAM).
    The current solution unfortunately requires sudo privileges; if I can find something better, I will open another pull request.
  • Changing NVIDIA GPU detection to check for the presence of the nvidia-smi tool.
    If nvidia-smi is not installed, it doesn't matter whether an NVIDIA GPU is present, since we couldn't read the data anyway.
    If it is installed, there is most probably also a GPU, because why would you install the NVIDIA drivers otherwise?
  • Changing from the dracula-ignore-lspci option to a dracula-force-gpu option.
    By default it is set to false and does nothing, but it can be set to "NVIDIA" to behave as if an NVIDIA GPU had been detected.
    Hopefully this will no longer be needed for NVIDIA now that detection switches to type -a nvidia-smi.
  • Adding a percentage option for VRAM and power.
  • Adding an option for finer-grained info on VRAM usage (a configuration sketch follows this list).
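As a quick illustration, here is a hedged .tmux.conf sketch of how these options might be enabled. Only @dracula-force-gpu is named in this pull request; the widget names and the two *-percent option names below are placeholders, not names confirmed here.

# sketch only: widget names and the *-percent options are assumed placeholders
set -g @dracula-plugins "gpu-usage gpu-power gpu-vram"   # assumed widget names
set -g @dracula-force-gpu "NVIDIA"       # option from this PR: act as if an NVIDIA GPU was detected (default: false)
set -g @dracula-gpu-power-percent true   # placeholder name for the new power percentage option
set -g @dracula-gpu-vram-percent true    # placeholder name for the new VRAM percentage option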

Since I own neither an Intel nor an AMD GPU, I cannot test these changes for those brands. However, I'm planning on teaming up with someone soon to tackle that too.

Roadmap:

  • Intel/AMD support
  • multiple-GPU support
  • adding all this info to the documentation

@ethancedwards8
Member

ethancedwards8 commented Sep 3, 2024

Thank you for your extensive contribution. As this is such a large pull request, I would feel better if there were documentation for the new options, specifically for the Apple GPU powermetrics option. I find it unlikely that most people will look in the code to see that they must add the command to their sudoers file.

If it is installed, there is most probably also a GPU, because why would you install the NVIDIA drivers otherwise?

Some Linux distributions may come with them preinstalled.

@Theoreticallyhugo
Contributor Author

I'm definitely adding all the options to the documentation I'm currently working on.
Additionally, I could add the sudoers note to the README, so that people who enable the GPU option will have to see it.
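For reference, powermetrics is the Apple tool that needs root here. A minimal sudoers entry along these lines would let it run without a password prompt; the username and the exact invocation are illustrative, not the plugin's literal code.

# /etc/sudoers.d/powermetrics (edit with visudo; replace "youruser" with your account)
youruser ALL = (root) NOPASSWD: /usr/bin/powermetrics

# the plugin could then sample GPU stats non-interactively, for example:
sudo powermetrics --samplers gpu_power -i 1000 -n 1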
Regarding the nvidia-smi option, I have an idea:
what if checking for nvidia-smi were a fallback for when the lspci command doesn't yield any result?

@ethancedwards8
Member

I'm definitely adding all the options to the documentation I'm currently working on. Additionally, I could add the sudoers note to the README, so that people who enable the GPU option will have to see it.

Please do.

Regarding the nvidia-smi option, I have an idea: what if checking for nvidia-smi were a fallback for when the lspci command doesn't yield any result?

I am good with that.

@@ -45,7 +57,7 @@ main()
 {
   # storing the refresh rate in the variable RATE, default is 5
   RATE=$(get_tmux_option "@dracula-refresh-rate" 5)
-  gpu_label=$(get_tmux_option "@dracula-gpu-usage-label" "GPU")
+  gpu_label=$(get_tmux_option "@dracula-gpu-power-label" "GPU")
@ethancedwards8 (Member)

This would break any existing configs. What is your rationale for changing this?

 }

 main()
 {
   # storing the refresh rate in the variable RATE, default is 5
   RATE=$(get_tmux_option "@dracula-refresh-rate" 5)
-  gpu_label=$(get_tmux_option "@dracula-gpu-usage-label" "VRAM")
+  gpu_label=$(get_tmux_option "@dracula-gpu-vram-label" "VRAM")
@ethancedwards8 (Member)

Maybe VRAM should be its own config option, and usage should stay as it is.

@Theoreticallyhugo
Contributor Author

Currently we have three different widgets using the @dracula-gpu-usage-label option: usage, power, and VRAM. That doesn't make a lot of sense to me, especially since for VRAM it defaults to "VRAM" while power and usage default to "GPU", and this differentiation of labels is lost as soon as the option is set.
I think each of the widgets should have its own individual option, which is why I changed the option for two of them (see the example below).
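For example, with separate options each widget keeps its own label. The power and VRAM option names appear in this pull request's diff; the usage one is the pre-existing option.

set -g @dracula-gpu-usage-label "GPU"    # existing usage-widget label
set -g @dracula-gpu-power-label "PWR"    # per-widget label introduced in this PR
set -g @dracula-gpu-vram-label "VRAM"    # per-widget label introduced in this PR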

@Theoreticallyhugo
Contributor Author

nvidia-smi detection is now used as the fallback option.
Tested on Manjaro with no lspci issues.
Tested on Ubuntu with lspci not installed; it uses nvidia-smi as the fallback.
Tested on macOS with an M1.
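A rough sketch of the fallback logic described here, not the exact code merged in this pull request: prefer lspci, and only rely on the presence of nvidia-smi when lspci is missing or reports nothing.

# sketch: lspci-first detection with nvidia-smi presence as the fallback
detect_gpu() {
  # prefer lspci when it is available and reports an NVIDIA device
  if command -v lspci >/dev/null 2>&1 && lspci 2>/dev/null | grep -qi nvidia; then
    echo "NVIDIA"
  # fall back to nvidia-smi being installed (no lspci, or no result from it)
  elif type -a nvidia-smi >/dev/null 2>&1; then
    echo "NVIDIA"
  else
    echo "none"
  fi
}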

@ethancedwards8 merged commit ae99a70 into dracula:master on Sep 5, 2024
1 check passed
@ethancedwards8
Member

Thank you so much! I appreciate all the work you have done recently.

@Theoreticallyhugo
Contributor Author

And there's more work to come!
In private I've been maintaining a fork for more than a year now, but I hadn't really felt confident contributing to a public project yet. Now that I do, there's a backlog of features I'd like to contribute, especially since I'd like to switch to using the original repo instead of my fork in the future.

@ethancedwards8
Member

Well everything you've submitted so far has been great! Keep doing what you're doing.
