
Visualize ControlNet results using Weights & Biases Tables #156

Open
soumik12345 opened this issue Apr 12, 2023 · 11 comments
@soumik12345

Add optional support for visualization of ControlNet results using wandb.Table in train_controlnet_flax.py. This can be used for summarizing results during training and inference.
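A minimal sketch of what such optional logging could look like. The column names and the helper/function names here are assumptions for illustration, not the actual code in train_controlnet_flax.py:

```python
def build_validation_rows(prompts, conditioning_images, generated_images):
    """Pair each prompt with its conditioning and generated image.

    Pure helper so the row layout can be built (and tested) without wandb.
    """
    return [
        [prompt, cond, gen]
        for prompt, cond, gen in zip(prompts, conditioning_images, generated_images)
    ]


def log_validation_table(prompts, conditioning_images, generated_images):
    """Log validation results as a wandb.Table (requires an active wandb run)."""
    import wandb  # imported here so the helper above stays usable without wandb

    table = wandb.Table(columns=["prompt", "conditioning", "generated"])
    for prompt, cond, gen in build_validation_rows(
        prompts, conditioning_images, generated_images
    ):
        table.add_data(prompt, wandb.Image(cond), wandb.Image(gen))
    wandb.log({"validation": table})
```

Logging the table under a fixed key (here `"validation"`) is what later lets the dashboard concatenate tables from multiple runs.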

@soumik12345 (Author)

cc: @sayakpaul

@sayakpaul (Member)

I am okay with the idea. Could you elaborate on the advantages of using wandb.Table() here, as opposed to not using it?

@soumik12345 (Author)

With wandb.Table, it becomes very easy to interactively explore the results in the Weights & Biases UI compared to the media panels, thanks to additional features such as query, group-by, filter, and sort (reference). Since tables are stored as artifacts, the results also become part of the lineage.

@sayakpaul (Member)

Thanks for providing the additional context!

Having used tables before, I can definitely see where you are coming from!

That said, we want our scripts to be minimal in terms of the features we support so that they are easily customizable. Do you have a (git) patch view of the changes that might be needed here to incorporate tables?

(referencing `def log_validation(controlnet, controlnet_params, tokenizer, args, rng, weight_dtype):` in train_controlnet_flax.py)

@soumik12345 (Author)

Hi @sayakpaul I have the changes on my fork here.

@sayakpaul (Member)

That is a bit hard for me to look through. Could you maybe try to do a git diff on the latest training script from this repository and provide the patch here?

@soumik12345 (Author)

[image: screenshot of the proposed changes]

@yiyixuxu (Contributor)

yiyixuxu commented Apr 13, 2023

how do you compare different runs?

@soumik12345 (Author)

> how do you compare different runs?

On your dashboard, tables from different runs are concatenated into the same table if they are logged under the same key. The best way to compare different runs, therefore, is to add an additional column called Run or Experiment, which will have the value wandb.run.id; after the table is logged, you can group by this column to compare results across runs.

Note that I prefer to log run.id in the table instead of run.name, because the run name is subject to change at the user's convenience in the UI even after the run has completed. The run.id is a unique ID assigned to each run, and the updated run name can still be viewed by hovering over the index column of a table with logs from multiple runs, as shown below.

[image: table with logs from multiple runs, showing the run name on hover over the index column]
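A hedged sketch of that grouping pattern. The column names and helper names are illustrative, not taken from the actual script:

```python
RUN_COLUMN = "run_id"


def build_rows_with_run_id(run_id, results):
    """Prepend the run id to each result row so the dashboard can group by it.

    `results` is a list of rows, e.g. [[prompt, image], ...]; pure helper so it
    can be built (and tested) without wandb.
    """
    return [[run_id, *row] for row in results]


def log_comparable_table(results):
    """Log a table keyed by wandb.run.id (requires an active wandb run)."""
    import wandb  # imported here so the helper above has no wandb dependency

    columns = [RUN_COLUMN, "prompt", "generated"]
    rows = build_rows_with_run_id(wandb.run.id, results)
    table = wandb.Table(columns=columns, data=rows)
    # Logging under the same key across runs concatenates the tables in the UI;
    # group by the `run_id` column on the dashboard to compare runs side by side.
    wandb.log({"validation": table})
```

Using `wandb.run.id` rather than the run name keeps the grouping stable even if someone renames the run later.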

@soumik12345 (Author)

Hi, @sayakpaul @yiyixuxu
Please let me know your thoughts on this. I would love to open a pull request if you give the green light.

@yiyixuxu (Contributor)

@soumik12345 do you have an example that I can play with?

The main concern is that we want it to be as easy to use as possible. Assume a participant who has never heard of Weights & Biases before; our goal is to make sure:

  1. they can follow the instructions to create a wandb account, run the example script, and understand how to use the dashboard with intuition alone;
  2. they can read over the part of the code that does the wandb logging, get a rough idea of how it works, and adapt it for their own training script without too much of a learning curve.

So I'm not in favor of adding a fancier logging feature to our example training script that would make the code harder to read and might require additional steps to interact with on the dashboard. However, if you want to write a separate inference script with this, we would be happy to link to it :)
