
Input_shape (None, None, 3) #98

Open
pluniak opened this issue May 9, 2019 · 5 comments
pluniak commented May 9, 2019

Referring to closed issue #65:

I still can't load the model with input_shape=(None, None, 3) using the latest (TF2-based) version of this repo:

Deeplabv3(weights='pascal_voc', input_shape=(None, None, 3), classes=1, backbone='xception', OS=16)

gives me

ValueError: Cannot convert a partially known TensorShape to a Tensor: (None, None)

caused by lines 417-419 in model.py. I can load the model when replacing x.shape with tf.shape(x) in line 418, but then I'm getting another shape error during training (model.fit). Am I doing something wrong? Do others get the same error message? I tried to solve the issue on my own, but it seems to be beyond my skills.

Additional info: using input_shape of (248,248,3) causes

ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 64, 64, 256), (None, 62, 62, 48)]

in line 425 of model.py. The TF1.13-based version of this repo accepts 248×248 input images.
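
For reference, here is a minimal self-contained sketch of what I mean by replacing x.shape with tf.shape(x): the resize size is built from the dynamic shape at run time instead of the static (None, None) shape. The layers and shapes below are made up for illustration only; the real edit sits inside the resize Lambda in model.py.

import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(None, None, 3))                    # spatial dims unknown at build time
feat = layers.Conv2D(8, 3, strides=4, padding='same')(inp)   # stand-in for the backbone

# inp.shape[1:3] is (None, None) and cannot be converted to a tensor;
# tf.shape(inp)[1:3] is a 1-D int32 tensor evaluated at run time, which works.
up = layers.Lambda(
    lambda t: tf.compat.v1.image.resize(t[0], tf.shape(t[1])[1:3],
                                        method='bilinear', align_corners=True)
)([feat, inp])

model = Model(inp, up)
print(model(tf.zeros((1, 96, 96, 3))).shape)                 # (1, 96, 96, 8)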

@yingshaoxo

With tensorflow 2.0.0-beta1, the resize in model.py is:

    b4 = Lambda(lambda x: tf.compat.v1.image.resize(x, size_before[1:3],
                                                    method='bilinear', align_corners=True))(b4)

This maps to tf.image.resize_images:

tf.image.resize_images(
    images,
    size,
    method=ResizeMethodV1.BILINEAR,
    align_corners=False,
    preserve_aspect_ratio=False,
    name=None
)

whose size parameter must be a 1-D int32 Tensor:

https://www.tensorflow.org/api_docs/python/tf/image/resize_images#args
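
A quick eager-mode check of that constraint (assuming TF 2.x; the shapes here are arbitrary, just for illustration):

import tensorflow as tf

img = tf.zeros((1, 30, 30, 320))

# OK: size is a 1-D int32 tensor (a Python list of two ints works too)
out = tf.compat.v1.image.resize(img, tf.constant([60, 60], dtype=tf.int32),
                                method='bilinear', align_corners=True)
print(out.shape)  # (1, 60, 60, 320)

# size_before[1:3] in model.py, however, is a TensorShape; with
# input_shape=(None, None, 3) it is (None, None), and converting that to the
# required 1-D int32 tensor is exactly what raises the ValueError.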


yingshaoxo commented Jul 25, 2019

If I use deeplab_model = Deeplabv3(input_shape=(None, None, 3), classes=4), I'll get:

size_before:  (None, None, None, 320)
size_before[1:3]:  (None, None)

which causes this error:

Traceback (most recent call last):
  File "main.py", line 3, in <module>
    deeplab_model = Deeplabv3(input_shape=(None, None, 3), classes=4)
  File "/home/yingshaoxo/Codes/keras-deeplab-v3-plus/model.py", line 385, in Deeplabv3
    method='bilinear', align_corners=True))(b4)
  File "/usr/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 662, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File "/usr/lib/python3.7/site-packages/tensorflow/python/keras/layers/core.py", line 785, in call
    return self.function(inputs, **arguments)
  File "/home/yingshaoxo/Codes/keras-deeplab-v3-plus/model.py", line 385, in <lambda>
    method='bilinear', align_corners=True))(b4)
  File "/usr/lib/python3.7/site-packages/tensorflow/python/ops/image_ops_impl.py", line 1180, in resize_images
    skip_resize_if_same=True)
  File "/usr/lib/python3.7/site-packages/tensorflow/python/ops/image_ops_impl.py", line 1041, in _resize_images_common
    raise ValueError('\'size\' must be a 1-D int32 Tensor')
ValueError: 'size' must be a 1-D int32 Tensor

@yingshaoxo

In the end, I have no choice but to reload the model every time before taking a new image as input:

import numpy as np
from PIL import Image
from model import Deeplabv3

image = np.array(Image.open('imgs/image1.jpg'))
image_shape = image.shape

# rebuild the model with a fixed input_shape that matches this particular image
deeplab_model = Deeplabv3(input_shape=(image_shape[0], image_shape[1], 3), classes=4)

# add the batch dimension and predict
image = np.expand_dims(image, axis=0)
y = deeplab_model.predict(image)

print(y)
print(y.shape)

@yingshaoxo

Wait a minute.

Maybe we don't need this feature at all.

Because normally you'll deal with same-resolution images in one specific task.

For example, processing a whole movie or video.

(Or you can still use a fixed-resolution image as input, then scale the resulting mask back yourself.)
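
Roughly like this (a sketch only; the 512x512 size and the imgs/image1.jpg path are just examples, and Deeplabv3 is the class from this repo's model.py as in my snippet above):

import numpy as np
from PIL import Image
from model import Deeplabv3

FIXED = 512  # any resolution the model builds with
deeplab_model = Deeplabv3(input_shape=(FIXED, FIXED, 3), classes=4)

image = Image.open('imgs/image1.jpg')
orig_w, orig_h = image.size

# resize the image to the fixed model resolution and predict
small = np.array(image.resize((FIXED, FIXED), Image.BILINEAR))
pred = deeplab_model.predict(np.expand_dims(small, axis=0))[0]  # (FIXED, FIXED, classes) scores
mask = np.argmax(pred, axis=-1).astype(np.uint8)                # (FIXED, FIXED) label map

# scale the mask back to the original resolution; nearest-neighbour keeps the labels intact
mask_full = np.array(Image.fromarray(mask).resize((orig_w, orig_h), Image.NEAREST))
print(mask_full.shape)  # (orig_h, orig_w)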


pluniak commented Sep 4, 2019

That's a workaround indeed. Resizing might affect segmentation performance though, as object dimensions change. Moreover, it's computationally expensive when done on the fly, or you need a second copy of your images on disk if you want to do it up front. That's why I think flexible model inputs are desirable.
Still, I want to give great props to @bonlime for the Keras implementation of this model. Awesome job and many thanks!
