Hello, after reading your code I have an idea I'd like to discuss.

From the training code, the LUT relation is roughly: result = w1 * lut1(img) + ... + wn * lutn(img) ------(1)
From demo_val, the relation is roughly: blend_lut = w1 * lut1 + ... + wn * lutn; result = blend_lut(img) ------(2)

If (1) and (2) above are equivalent, does that mean that with enough training the mapping could be represented by a single LUT?

Another question: how do you evaluate the training result, i.e., when can training be stopped?

Looking forward to your reply. Thanks.
(1) and (2) are always equivalent, but the mapping cannot be expressed by a single fixed LUT, because w is a set of weights that varies with the image content. You can compute metrics on a validation set to monitor the training process; the code should print these metrics during training.
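The equivalence of (1) and (2) follows from LUT lookup being linear in the table values: blending the outputs of n LUTs equals applying the blended table once. A minimal NumPy sketch (toy 1-D LUTs with nearest-neighbor lookup standing in for the real trilinear 3D LUTs, which are also linear in the table entries):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n 1-D LUTs with 33 entries each, applied to a flattened
# "image" by nearest-neighbor lookup. Trilinear interpolation over a
# 3D LUT is also linear in the table values, so the same argument holds.
n, size = 3, 33
luts = rng.random((n, size))
w = rng.random(n)                  # per-image blending weights
img = rng.random(1000)             # pixel values in [0, 1]
idx = np.clip((img * (size - 1)).round().astype(int), 0, size - 1)

# Form (1): apply each LUT, then blend the outputs
out1 = sum(w[i] * luts[i][idx] for i in range(n))

# Form (2): blend the LUT tables first, then apply the blended LUT once
blend_lut = sum(w[i] * luts[i] for i in range(n))
out2 = blend_lut[idx]

print(np.allclose(out1, out2))  # True: lookup commutes with blending
```

Because w is predicted per image, blend_lut changes for every input; that is why no single precomputed LUT can replace the whole model.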
Oh, I see. Thanks a lot.
After your paper, two similar papers appeared:
- Real-time Image Enhancer via Learnable Spatial-aware 3D Lookup Tables
- Learning Pixel-Adaptive Weights for Portrait Photo Retouching

They seem to address the problem that the same color at different spatial positions can be mapped to different colors. I wonder how well this works for images of different sizes after resizing.

Also, how should the quality of images processed with the adaptive LUT be evaluated? With PSNR/SSIM? Subjectively, the sharpness of the processed image seems somewhat lower than the original. Why might that be?

Thanks.
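For reference, PSNR against a ground-truth retouched image is straightforward to compute with plain NumPy; a minimal sketch (the image pair here is synthetic, purely for illustration; `skimage.metrics.structural_similarity` can be used for SSIM):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two float images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
gt = rng.random((64, 64, 3))  # hypothetical ground-truth retouched image
# hypothetical LUT-enhanced output: ground truth plus small residual error
enhanced = np.clip(gt + 0.01 * rng.standard_normal(gt.shape), 0.0, 1.0)

print(f"PSNR: {psnr(gt, enhanced):.2f} dB")
```

Note that PSNR/SSIM measure fidelity to a reference retouch, not sharpness; a perceived loss of sharpness would not necessarily show up in these metrics.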