I trained a model on my own dataset of 22,000 256x256 JPG image pairs. But when I run test.py to generate B images from A images that are also 256x256, the results contain only the left 128x256 pixels of the input, stretched out to 256x256.
During training, the output in visdom looked correct, with no stretching. I cannot figure out what would cause this. I haven't passed any extra command-line options.
My test command is:
python test.py --dataroot ./datasets/biosphereNormalMaps --name biosphereNormalMaps --model pix2pix --direction AtoB
I've tried making the test images both JPG and PNG, with the same result.
Oh I see what it is now. The test script assumes that the images provided have an A domain image on the left and a B domain image on the right. So when I give it a 256x256 image that is just an A image, it stretches that image to 512x256 and treats the left half as A and the right half as B.
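Given that assumption, one workaround is to pre-process each lone A image into the side-by-side A|B layout the aligned loader expects before running test.py. The sketch below (function name and paths are hypothetical, using Pillow) pastes the A image into the left half of a double-width canvas and fills the right half with a placeholder that the test pass ignores:

```python
# Sketch: pack a lone A-domain image into the A|B side-by-side layout
# that the aligned test loader expects (A on the left, B on the right).
# The right half is a throwaway placeholder, since B is unused at test time.
from PIL import Image

def make_aligned(a_path, out_path):
    a = Image.open(a_path).convert("RGB")   # e.g. a 256x256 A image
    w, h = a.size
    pair = Image.new("RGB", (2 * w, h))     # 512x256 canvas for a 256x256 input
    pair.paste(a, (0, 0))                   # left half: the real A image
    pair.paste(a, (w, 0))                   # right half: placeholder "B"
    pair.save(out_path)
```

Running this over the test folder produces 512x256 images whose left halves are the real inputs, so the loader's split recovers the full A image instead of half of it.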
Is there a command line option, or a dedicated script, for generating images from data that is just a single A or B image?
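If this is the junyanz/pytorch-CycleGAN-and-pix2pix codebase, its README documents a single-image test mode: pass `--model test` together with `--dataset_mode single` and point `--dataroot` at a folder containing only A images (you also need to specify the generator architecture and norm used in training). A command along these lines (the `--netG` and `--norm` values here are assumptions that must match your training settings):

```shell
python test.py --dataroot ./datasets/biosphereNormalMaps/testA \
    --name biosphereNormalMaps --model test --dataset_mode single \
    --netG unet_256 --norm batch --direction AtoB
```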