Anisotropic regularization parameter #214
base: main
Conversation
Sorry for the late review!
Looks OK in principle. I could not find reasonable objections to your code. However, I am a bit concerned about your PR description. Your observations regarding convergence and performance issues are of course not something we should ignore. Just to understand: is the current implementation working as before? Or would I have to retune all my configurations for previous analyses, e.g., the benchmark analysis? I just want to make sure it is safe to merge and use (blindly) without breaking anything we have not investigated enough. What do you think? Maybe we should talk about this.
Out of curiosity: how would one tell the tvd signature that we want to use the mesh size associated with the provided image? Maybe we should include this in the example, even if it currently works rather poorly.
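If we add it to the example, it could look roughly like this sketch; note that the h keyword and the voxel_size attribute here are assumptions about the interface, not the confirmed signature:

```python
import darsia  # assumes darsia is importable and `image` is a darsia image

# Hypothetical sketch: pass the physical voxel size of the image as the mesh size,
# so derivatives are taken with respect to physical coordinates.
denoised = darsia.tvd(
    image,
    solver=darsia.Jacobi(maxiter=20),
    h=image.voxel_size,  # assumed attribute and keyword names
)
```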
Interesting side note.
solver=darsia.Jacobi(maxiter=20),
# solver=darsia.CG(maxiter=20, tol=1e-3),
)
Add a comment like # Regularize image using isotropic tvd
# Feed the solver with parameters, follow the suggestion to use double the weight
# for the diffusion coefficient if no value is provided.
# for the mass coefficient if no value is provided.
I would suggest enhancing the comment to explain why ell is additionally set to 1 if it is zero. Furthermore, I would use an inexact check of whether ell is (almost) zero.
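As a minimal sketch of the inexact check (the surrounding logic and variable names are assumed from the discussion, not taken from the code):

```python
import numpy as np

ell = 0.0  # stand-in value; in the PR this comes from the provided parameters

# Inexact check: treat ell as (almost) zero with a tolerance instead of ell == 0.
if np.isclose(ell, 0.0):
    # Assumed rationale for the fallback; the enhanced comment in the code
    # should state the actual reason ell is set to 1 in this case.
    ell = 1.0
```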
def _neighbor_accumulation(self, im: np.ndarray) -> np.ndarray:
def _neighbor_accumulation(
    self, im: np.ndarray, h: Optional[Union[float, list]] = None
) -> np.ndarray:
    """Accumulation of neighbor pixels.
Better documentation would be good here. What is "accumulation" and how does it depend on the grid size provided?
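For reference, a self-contained sketch of what such a neighbor accumulation might do once a grid size is provided: sum the two neighbors along every axis, with each axis weighted by 1 / h_i**2. This is an assumption consistent with the diagonal mass_coeff + 2 * diffusion_coeff * voxel_weights in the diff below, not the actual darsia implementation.

```python
from typing import Optional, Union

import numpy as np


def neighbor_accumulation(
    im: np.ndarray, h: Optional[Union[float, list]] = None
) -> np.ndarray:
    """Sum the two axis neighbors of every voxel, weighted by 1 / h_i**2 (illustrative only)."""
    dim = im.ndim
    if h is None:
        h = [1.0] * dim
    elif np.isscalar(h):
        h = [float(h)] * dim
    result = np.zeros(im.shape, dtype=float)
    for axis in range(dim):
        weight = 1.0 / h[axis] ** 2
        # Neighbors in the +/- direction along this axis; np.roll wraps around at the
        # boundary, whereas a real implementation would treat boundaries explicitly.
        result += weight * (np.roll(im, 1, axis=axis) + np.roll(im, -1, axis=axis))
    return result
```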
assert len(h) == self.dim, "h must be a float or a list of length dim"
voxel_weights = sum([1 / h[i] ** 2 for i in range(self.dim)])

return self.mass_coeff + 2 * self.diffusion_coeff * voxel_weights
Should the 2 be moved into the definition of voxel_weights in line 89?
Added functionality to take physical derivatives, and also included it in the Jacobi and CG solvers. However, it seemed to work poorly when tested with the tvd. It might be because the scaling of the linear system solved in the tvd gets off when the Laplace term suddenly becomes huge compared to the mass term. I guess that in a heat-equation setting there would normally be some sort of CFL condition on the step size parameter here (but we don't have that). It could also be that I made a mistake somewhere and that is why it did not work. Nevertheless, I don't have more time to work on this in the coming weeks, so I removed it from the tvd and made a PR with the other changes.
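To make the scaling concern concrete, a back-of-the-envelope sketch using the diagonal formula from the diff above (the numbers are made up for illustration):

```python
# With unit grid spacing, the diffusion contribution to the diagonal stays moderate:
mass_coeff, diffusion_coeff = 1.0, 1.0
h_unit = [1.0, 1.0]
diag_unit = mass_coeff + 2 * diffusion_coeff * sum(1 / hi**2 for hi in h_unit)  # 5.0

# With a physical voxel size of, say, 1e-3 (made-up value), 1 / h**2 = 1e6 per axis,
# so the Laplace term dwarfs the mass term and the linear system becomes badly scaled:
h_physical = [1e-3, 1e-3]
diag_physical = mass_coeff + 2 * diffusion_coeff * sum(1 / hi**2 for hi in h_physical)  # ~4e6
```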
It is now possible to prescribe different regularization parameters for the different directions when doing tvd (see example).
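For readers without the example at hand, a sketch of what an anisotropic call could look like, assuming darsia.tvd accepts one regularization weight per direction via mu (keyword names and values here are assumptions, not the confirmed signature):

```python
import numpy as np

import darsia  # assumes darsia is importable

img = np.random.rand(128, 128)  # stand-in image

# Hypothetical call: one regularization weight per spatial direction, so the
# denoising acts more strongly along the first axis than along the second.
denoised = darsia.tvd(
    img,
    mu=[0.1, 0.01],  # assumed per-direction weights
    solver=darsia.Jacobi(maxiter=20),
    # solver=darsia.CG(maxiter=20, tol=1e-3),
)
```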
Sidenote: I do wonder if my placement of mu in the tvd algorithm is suboptimal and makes things converge slower than necessary. What I mean is that I could have substituted \mu \nabla g instead of just \nabla g when splitting the minimization problem. I'll keep this in the back of my head and perhaps try it the other way around later.