
Bug: Trouble loading and using a TensorFlow SavedModel with a custom input layer in Keras; it worked fine for 2+ years and my team and I are now looking for a solution #1924

Open
IOIntInc opened this issue Jun 27, 2024 · 1 comment

Comments

@IOIntInc

Bug Description

Bug Reproduction

Code for reproducing the bug:

Data used by the code:

Expected Behavior

Setup Details

Include the details about the versions of:

  • OS type and version:
  • Python:
  • autokeras:
  • keras-tuner:
  • scikit-learn:
  • numpy:
  • pandas:
  • tensorflow:

Additional context


Hi everyone,

I'm currently working on a project where I need to load a TensorFlow SavedModel and use it with a custom input layer in Keras. However, I'm running into issues when trying to perform inference with the model. Here’s what I’ve done so far:

  1. Loaded the SavedModel as an inference-only layer:
    import tensorflow as tf
    from keras import Input
    from keras.layers import TFSMLayer
    from keras.models import Model

    # Wrap the SavedModel's serving signature as an inference-only Keras layer
    base_model = TFSMLayer("path/to/savedmodel", call_endpoint='serving_default')

    # Build a functional model around the wrapped layer
    input_layer = Input(shape=(32,), dtype='float64')
    output_layer = base_model(input_layer)
    model = Model(inputs=input_layer, outputs=output_layer)

  2. Converted the test data to the required float64 dtype:
    import pandas as pd

    # Load the CSV and cast its underlying NumPy array to float64
    test_data = pd.read_csv("path/to/testdata.csv")
    test_data_float64 = tf.cast(test_data.values, tf.float64)

  3. Attempted to use the model for inference:
    predictions = model(test_data_float64)

However, when I run the inference step I get errors about input dtype and shape compatibility.
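In case it helps to compare against what I'm passing in, here is a minimal sketch of how I've been inspecting the SavedModel's serving signature; the path is a placeholder for my real model directory:

    import tensorflow as tf

    # Load the raw SavedModel (outside Keras) just to inspect its signatures
    loaded = tf.saved_model.load("path/to/savedmodel")
    serving_fn = loaded.signatures["serving_default"]

    # Expected input names, dtypes, and shapes
    print(serving_fn.structured_input_signature)

    # Names of the outputs the endpoint returns
    print(serving_fn.structured_outputs)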

My Questions:

  1. Data Type Compatibility: How can I ensure that the input data is correctly formatted and compatible with the expected input dtype of the TFSMLayer? (A sketch of what I've been trying is just after this list.)
  2. Shape Issues: Are there any common pitfalls or best practices when dealing with custom input layers in Keras models that load TensorFlow SavedModels?
  3. Inference with Custom Layers: Is there a better approach to modify the input layer of a pre-trained TensorFlow SavedModel for inference in Keras?
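
For question 1, here is a sketch of the call pattern I have been trying (it reuses the model built in step 1 above); the dict handling is an assumption on my part, since the serving_default endpoint seems to return a dict of named tensors rather than a single tensor:

    import numpy as np
    import pandas as pd
    import tensorflow as tf

    # Cast explicitly on the NumPy side before handing the data to the model
    test_data = pd.read_csv("path/to/testdata.csv")
    features = tf.convert_to_tensor(test_data.to_numpy(dtype=np.float64))

    # TFSMLayer returns whatever the serving endpoint produces, which in my
    # case appears to be a dict of named outputs rather than a plain tensor
    raw_output = model(features)
    predictions = next(iter(raw_output.values())) if isinstance(raw_output, dict) else raw_output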

Any guidance or suggestions on how to resolve these issues would be greatly appreciated. Thank you!



@IOIntInc
Author

If someone could please help me with this, I'm willing to pay or do whatever it takes, and I'm happy to share more code or whatever else is needed to find a solution/fix.
