
A question! #2

Open
h20181007031 opened this issue Jun 25, 2024 · 3 comments
Labels
question Further information is requested

Comments

@h20181007031

Hello! Sorry to bother you, but I have a few questions about your project. Does your project use a deep learning network to generate halftone images? That is, instead of the traditional halftone algorithms used in the past, does it implement the halftoning process with a model?

@Nikronic added the question (Further information is requested) label Jun 27, 2024
@Nikronic
Owner

Hi,

Firstly, I am using Google Translate, so I may not have understood your question entirely accurately.

But regarding the (possible) answer: the original pipeline runs classic halftone algorithms on the Places365 dataset, then the halftoned images are fed into the deep learning pipeline. In the end, the original Places365 image (before going through halftoning) is compared to the output of the model (an autoencoder architecture).

In fact, the original title of the paper says "rescreening halftone images", which is the process of removing halftone. So, to generate the training data, classic halftone algorithms are applied to normal images (Places365).
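For illustration, here is a minimal PyTorch-style sketch of one training step in that descreening setup; `autoencoder`, `classic_halftone`, and the other names are hypothetical placeholders, not identifiers from this repository.

```python
import torch
import torch.nn as nn

def train_step(autoencoder: nn.Module,
               classic_halftone,          # e.g. ordered dithering or error diffusion
               original: torch.Tensor,    # batch of clean Places365 images
               optimizer: torch.optim.Optimizer,
               criterion: nn.Module = nn.MSELoss()) -> float:
    # Generate the training input by halftoning the clean image.
    halftoned = classic_halftone(original)
    # The autoencoder tries to reconstruct the clean image from the halftone.
    reconstructed = autoencoder(halftoned)
    # The loss compares the reconstruction to the pre-halftone original.
    loss = criterion(reconstructed, original)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```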

I hope this answers your question.

PS: I think my implementation of the loss function is slightly incorrect, but the structure of the code should represent the paper's architecture correctly.

@h20181007031
Author

h20181007031 commented Jun 28, 2024 via email

@Nikronic
Owner

@h20181007031

Yes, you are correct. The current implementation is reverse halftoning, but the architecture and methodology should work for your purpose too. In the simplest terms, you feed the original (completely unprocessed) image to your network, then compare the network's final output to halftoned images generated via classical approaches.
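As a rough sketch of that swapped pairing (again with hypothetical names, not code from this repo), a dataset could yield the clean image as the network input and the classically halftoned image as the supervision target:

```python
import torch
from torch.utils.data import Dataset

class ForwardHalftoneDataset(Dataset):
    def __init__(self, images, classic_halftone):
        self.images = images              # sequence of clean image tensors
        self.halftone = classic_halftone  # any classical halftoning function

    def __len__(self) -> int:
        return len(self.images)

    def __getitem__(self, idx):
        original = self.images[idx]
        target = self.halftone(original)  # classically generated "label"
        return original, target           # input to the network, training target
```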

But one question you should keep asking yourself is why one should rely on a deep learning model (which is more resource-hungry) instead of a classical approach.

Best regards,
