
Number m in Table 2 in your paper #19

Open
schnyox opened this issue Aug 10, 2017 · 5 comments

Comments


schnyox commented Aug 10, 2017

From the code I gather that there is a short skip connection that concatenates the input of the bottleneck block to its output. Shouldn't this connection appear in DenseNet.jpg? Also, can you explain why this block is called Bottleneck, and what its purpose is?

Best,
Nikola.

@schnyox schnyox changed the title Bottleneck short skip connection missing in DenseNet.jpg or not? Number m in Table 2 in your paper Aug 10, 2017

schnyox commented Aug 10, 2017

I changed the title of the issue after compiling your model on my PC and running model.summary().

The number of feature maps output by each block after the transition-down blocks, i.e. by the bottleneck block and the transition-up blocks, is n x k, where n and k are the number of layers in a dense block and the growth rate, respectively. The inputs to the last layer of these blocks are 880, 1072, etc., matching the original values in the paper.
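The channel bookkeeping being discussed can be traced with a short sketch. This is a hypothetical reconstruction, not the repository's code, and it assumes the FC-DenseNet103 configuration commonly reported for this architecture (a 48-filter initial convolution, growth rate k = 16, and downsampling dense blocks of 4, 5, 7, 10, and 12 layers):

```python
# Hypothetical sketch of the downsampling-path channel counts in an
# FC-DenseNet-style network. The configuration numbers (48 initial
# filters, k = 16, blocks of 4/5/7/10/12 layers) are assumptions based
# on FC-DenseNet103, not taken from this thread.

def dense_block_out(n_layers, growth_rate):
    """Channels produced by the new layers of a dense block (n * k)."""
    return n_layers * growth_rate

def down_path_channels(init_channels=48, growth_rate=16,
                       layers_per_block=(4, 5, 7, 10, 12)):
    """Return the stack size (m) after each DenseBlock + TransitionDown."""
    m = init_channels
    sizes = []
    for n in layers_per_block:
        # The dense block concatenates its input with n new feature maps;
        # the transition down keeps the channel count unchanged.
        m = m + dense_block_out(n, growth_rate)
        sizes.append(m)
    return sizes

print(down_path_channels())  # [112, 192, 304, 464, 656]
```

The last value, 656, is the stack size after DB(12 layers)+TD that the rest of this thread builds on.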


grypes commented Sep 1, 2017

Am I wrong? I just recalculated the m values in the paper. After DB(12 layers)+TD, m = 656, so for DB(15 layers) I get m = 15*16 + 656 = 896, not 880. I also read your code, and it seems to be 896 there too. Could you help me check? Thanks a lot.
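That arithmetic is easy to reproduce. A minimal check, taking the 656-channel stack after DB(12 layers)+TD as given:

```python
# Reproducing the arithmetic in the comment above: the bottleneck dense
# block (15 layers at growth rate 16) concatenates 15*16 new feature
# maps onto the 656-channel stack left by DB(12 layers) + TD.
stack_after_td = 656
bottleneck_layers = 15
growth_rate = 16

m_bottleneck = stack_after_td + bottleneck_layers * growth_rate
print(m_bottleneck)  # 896
```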


@decrispell

Has this issue been resolved? I still don't see how the (corrected) m values of the bottleneck and decoder blocks are arrived at. From my understanding of Table 2 in the paper, each decoder block is composed of a TransitionUp followed by a DenseBlock. According to Figure 1 (and verified in the code), the channels from the skip connections are concatenated before the DenseBlock, not after, so I'd expect the m values to simply be the number of channels output by each DenseBlock, i.e. 16*num_layers. What am I missing?

@decrispell

I believe I now understand the issue, so I'll leave my thoughts here in case anyone else suffers the same confusion. Anyone with a better understanding of the work, please feel free to confirm / correct me.

The "m" value in Table 2 describes the dimension of the "stack" variable in FC-DenseNet.py. However, for all of the upsampling blocks except the last (as pointed out in issue #9) that variable is unused for anything further. So, the actual dimension of the information flowing to the next block is indeed just the output of the Dense Block (block_to_upsample in the code), which is of dimension num_layers*growth_rate.
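The distinction between the full stack and block_to_upsample can be made concrete with a small sketch. The helper below is hypothetical (it is not from FC-DenseNet.py), and the skip sizes and block depths it is called with are assumed FC-DenseNet103 numbers:

```python
def up_path(skip_sizes, layers_per_block, bottleneck_out, growth_rate=16):
    """Channel bookkeeping on the upsampling path, deepest block first.

    Returns (db_input, m) per up block: db_input is the channel count
    actually fed to the dense block (skip connection plus the upsampled
    new feature maps), and m is the full concatenated stack that
    Table 2 reports, which is unused except at the final block.
    """
    results = []
    to_upsample = bottleneck_out
    for skip, n in zip(skip_sizes, layers_per_block):
        db_input = skip + to_upsample     # TU output concatenated with skip
        m = db_input + n * growth_rate    # stack size (Table 2's m)
        to_upsample = n * growth_rate     # only the new maps are upsampled
        results.append((db_input, m))
    return results

# Assumed numbers: skips 656..112 saved on the way down, up blocks of
# 12, 10, 7, 5, 4 layers, bottleneck output 15 * 16 = 240 channels.
print(up_path([656, 464, 304, 192, 112], [12, 10, 7, 5, 4], 240))
# first entry: (896, 1088)
```

The first entry shows the corrected bottleneck value from this thread: the deepest up block receives 656 + 240 = 896 channels, while its stack m would be 1088, not the 1072 implied by the paper's 880.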
