SaturationPlus Filter Parameter returns 0 #46
I trained a new model after changing some of the code in filters.py: I modified the SaturationPlusFilter() class to more closely resemble the ContrastFilter. After training for 20,000 iterations on my own data, the Saturation filter now gives me different parameters! Unfortunately, it consistently returns parameters between -0.45 and -0.51, so it reliably reduces saturation instead of enhancing it, which is what I expected the network to learn from my Uncorrected and Corrected image dataset... Here's the code I changed to get the SaturationPlusFilter to work:
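For readers unfamiliar with the structure, a minimal numpy sketch of the kind of filter I mean follows: a single learned parameter (squashed into a bounded range, the way ContrastFilter does it) that linearly interpolates between the input image and a saturation-boosted version of it. All names here are mine for illustration, not the repo's actual code:

```python
import numpy as np

def tanh_range(x, lo, hi):
    # squash an unbounded regressor output into [lo, hi],
    # following the ContrastFilter parameter-mapping pattern
    return np.tanh(x) * (hi - lo) / 2 + (lo + hi) / 2

def saturation_enhance(img):
    # crude "fully enhanced" target: push each pixel away from
    # its luminance, then clip back into [0, 1]
    lum = img @ np.array([0.299, 0.587, 0.114])
    return np.clip(lum[..., None] + 2.0 * (img - lum[..., None]), 0, 1)

def process(img, param):
    # ContrastFilter-style: lerp between the input and the enhanced
    # image, weighted by the single learned parameter
    return np.clip((1 - param) * img + param * saturation_enhance(img), 0, 1)
```

With param = 0 the filter is an identity, and grayscale input passes through unchanged, which is the behavior a saturation filter should have.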
Note that I've updated the code globally to reduce images to a resolution of 256x256 pixels instead of 64x64, in order to preserve the histograms as much as possible: I'm working with very high-resolution (6000x4000) images, and the default source image size seems to drop too much information before processing. This change may have significant effects on the filters that I'm not aware of, but so far, performance on my images seems to have improved.
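The histogram-preservation effect is easy to demonstrate on synthetic data. The sketch below (my own illustration, not code from the repo) area-downsamples a noise image to 64x64-equivalent and 256x256-equivalent resolutions and compares each histogram to the full-resolution one; the coarser downsample distorts the histogram more. Real photos have spatial correlation, so the effect there is milder than with noise:

```python
import numpy as np

def block_mean(img, factor):
    # area resampling: average factor x factor blocks
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def hist(img, bins=32):
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
    return h

rng = np.random.default_rng(0)
full = rng.random((1024, 1024))  # stand-in for a high-resolution image

# L1 distance between each downsampled histogram and the full-res one
err_64 = np.abs(hist(block_mean(full, 16)) - hist(full)).sum()   # ~64x64
err_256 = np.abs(hist(block_mean(full, 4)) - hist(full)).sum()   # ~256x256
```

err_64 comes out substantially larger than err_256, consistent with the observation that 64x64 inputs lose histogram information.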
On all of my results, the .steps. image shows that the parameter used for the SaturationPlus filter is always 0.00. Sometimes it's 0.01, but it's never enough to make a meaningful difference in the next image.
I thought that maybe it was a problem with the data I used to train, but when I looked at the pretrained example's outputs, I saw the same issue with the SaturationPlus filter on those .steps. images.
That's the problem I'm experiencing right now. If anyone can offer some guidance, I would appreciate it. Below, I'll describe some of the steps I've taken to try to fix it and explain why they haven't worked:
I noticed that Tensorflow now has a function for adjusting the saturation of images directly:
enhanced_s = tf.compat.v1.image.adjust_saturation(img, scale)
where scale is the multiplier applied to the input image's saturation. I tried replacing the process() function of the SaturationPlusFilter() class in filters.py with this function. Of course, there's no preexisting gradient for adjust_saturation(), so I hardcoded the scale value to 1.5 and used the output parameter to linearly interpolate between the input and enhanced images, as Yuanming-Hu did. But the network still doesn't learn how to use the filter properly after 20,000 iterations of training.
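The interpolation scheme I mean can be sketched in plain numpy. Here adjust_saturation() is a per-pixel colorsys stand-in for tf.image.adjust_saturation (RGB to HSV, scale S, back to RGB), the 1.5 scale is hardcoded as described, and the learned parameter only controls the blend strength. This is a sketch of the idea, not my actual patch:

```python
import colorsys
import numpy as np

SCALE = 1.5  # hardcoded saturation multiplier, as in the workaround above

def adjust_saturation(img, scale):
    # numpy/colorsys stand-in for tf.image.adjust_saturation:
    # convert each RGB pixel to HSV, scale S (clipped to 1), convert back
    out = np.empty_like(img)
    for idx in np.ndindex(img.shape[:2]):
        h, s, v = colorsys.rgb_to_hsv(*img[idx])
        out[idx] = colorsys.hsv_to_rgb(h, min(s * scale, 1.0), v)
    return out

def process(img, param):
    # lerp between the input and the saturation-boosted image, so the
    # network's gradient only flows through the blend parameter
    enhanced = adjust_saturation(img, SCALE)
    return img + param * (enhanced - img)
```

Since the gradient only has to flow through param (not through the HSV conversion), this sidesteps the missing gradient for adjust_saturation itself, which is why the lerp formulation is attractive even though the network still isn't learning to use it.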