🐛 Question about tiled inference
Hello, thank you for your excellent work. I understand the advantage of tiled inference, but the way the weights are used confuses me. For each tile, the inference result is multiplied by the weight; at the final step, however, the accumulated image is divided by the norm mask (in the merge function). In my opinion, dividing the result by the norm mask seems to cancel the weighting mechanism. Could you please explain this further? Perhaps norm_mask should accumulate a different quantity than the weight applied to the inference results (for example, the number of inferences at each pixel, which differs from pyramid_patch_weight_loss) so that the result is normalized correctly? Thank you in advance!
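To restate the computation shown under "To Reproduce" as a formula (my notation, not from the repository): at each output pixel $p$, merge() returns

$$
\hat{y}(p) \;=\; \frac{\sum_{i} w_i(p)\, \hat{y}_i(p)}{\sum_{i} w_i(p)},
$$

where the sum runs over the tiles $i$ that cover $p$, $\hat{y}_i(p)$ is tile $i$'s prediction at $p$, and $w_i(p)$ is the value the weight map assigns to $p$'s position inside tile $i$.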
To Reproduce
for tile, (x, y, tile_width, tile_height) in zip(batch, crop_coords):
    # Accumulate the weighted prediction for this tile
    self.image[:, y : y + tile_height, x : x + tile_width] += tile * self.weight
    # Accumulate the weight itself at the same location, for use in merge()
    self.norm_mask[:, y : y + tile_height, x : x + tile_width] += self.weight

def merge(self) -> torch.Tensor:
    # Divide the accumulated weighted sum by the accumulated weight
    return self.image / self.norm_mask
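For reference, here is a minimal self-contained sketch of the same accumulate-and-normalize pattern (a standalone toy, not the library's actual class; the center-heavy weight map below is my own stand-in for the pyramid weight). Two tiles overlap on a small canvas, and in the overlap the merged value is the weighted average of the two predictions, pulled toward whichever tile's center is closer:

import torch

C, H, W = 1, 4, 6          # canvas: 1 channel, 4 rows, 6 columns
tile_h, tile_w = 4, 4      # two 4x4 tiles, overlapping in columns 2-3

image = torch.zeros(C, H, W)
norm_mask = torch.zeros(C, H, W)

# Toy center-heavy weight map (stand-in for the pyramid weight):
# larger near the tile center, smaller toward the edges.
yy = torch.linspace(0.5, 1.0, tile_h).view(-1, 1)
xx = torch.linspace(0.5, 1.0, tile_w).view(1, -1)
weight = (torch.minimum(yy, yy.flip(0)) * torch.minimum(xx, xx.flip(1))).unsqueeze(0)

tiles = [torch.full((C, tile_h, tile_w), 1.0),  # tile 1 predicts 1.0 everywhere
         torch.full((C, tile_h, tile_w), 3.0)]  # tile 2 predicts 3.0 everywhere
coords = [(0, 0), (2, 0)]                       # (x, y) offset of each tile

for tile, (x, y) in zip(tiles, coords):
    image[:, y:y + tile_h, x:x + tile_w] += tile * weight
    norm_mask[:, y:y + tile_h, x:x + tile_w] += weight

merged = image / norm_mask
print(merged[0])
# Non-overlapping pixels recover the tile value exactly (w * p / w = p).
# Overlapping pixels are sum_i(w_i * p_i) / sum_i(w_i): column 2 sits
# nearer tile 1's center, so it comes out closer to 1.0 than to 3.0.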