Why does the batch norm layer have weight and bias parameters? #65

Open
AsakusaRinne opened this issue Aug 6, 2020 · 2 comments

Comments

@AsakusaRinne

In this tutorial, when explaining the weight file, it mentions that if a convolutional layer has batch norm, it will get 4 parameters from the weight file: bn_weight, bn_bias, bn_running_mean, and bn_running_var. But for batch normalization, why do we still need bn_weight and bn_bias? What role do bn_weight and bn_bias play?
I am new to deep learning, so my question may seem naive... I would appreciate it if you could tell me the answer or where to find it.
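For reference, here is what I mean as a minimal PyTorch sketch (I'm assuming the tutorial's four bn_* tensors correspond to the attributes of nn.BatchNorm2d with the same meaning; the mapping below is just my guess):

```python
import torch.nn as nn

# Minimal sketch (assumption: the tutorial's bn_* tensors map onto
# PyTorch's nn.BatchNorm2d attributes of the same meaning).
bn = nn.BatchNorm2d(16)
print(bn.weight.shape)        # torch.Size([16]) -> bn_weight (one value per channel)
print(bn.bias.shape)          # torch.Size([16]) -> bn_bias
print(bn.running_mean.shape)  # torch.Size([16]) -> bn_running_mean
print(bn.running_var.shape)   # torch.Size([16]) -> bn_running_var
```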

@bot66

bot66 commented Aug 13, 2020

Just google the batch norm definition: batch norm has learnable parameters, it doesn't simply normalize the input data, and those parameters get updated during training like any other weights.
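Concretely, here is a minimal sketch assuming PyTorch's nn.BatchNorm2d: after normalizing with the (running) mean and variance, the layer rescales and shifts the result with the learnable weight (gamma) and bias (beta), so the network can learn to adjust or even undo the normalization if that helps.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=3)

# weight (gamma) and bias (beta) are trainable parameters;
# running_mean / running_var are tracked statistics, not trained by gradient.
print([name for name, _ in bn.named_parameters()])  # ['weight', 'bias']
print([name for name, _ in bn.named_buffers()])     # ['running_mean', 'running_var', 'num_batches_tracked']

# In eval mode: y = weight * (x - running_mean) / sqrt(running_var + eps) + bias
bn.eval()
x = torch.randn(2, 3, 4, 4)
x_hat = (x - bn.running_mean.view(1, -1, 1, 1)) / torch.sqrt(bn.running_var.view(1, -1, 1, 1) + bn.eps)
y_manual = bn.weight.view(1, -1, 1, 1) * x_hat + bn.bias.view(1, -1, 1, 1)
print(torch.allclose(bn(x), y_manual, atol=1e-6))  # True
```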

@AsakusaRinne
Author

Just google the batch norm definition: batch norm has learnable parameters, it doesn't simply normalize the input data, and those parameters get updated during training like any other weights.

Thank you very much!
