
Question about lack of positional encoding in timeseries_classification_transformer.py #1894

Open
Atousa-Kalantari opened this issue Jul 20, 2024 · 3 comments

Comments

@Atousa-Kalantari

Issue Type

Bug

Source

source

Keras Version

Keras 2.14

Custom Code

Yes

OS Platform and Distribution

No response

Python version

No response

GPU model and memory

No response

Current Behavior?

I noticed that positional encoding is not used in the timeseries_classification_transformer.py example. Given the importance of sequence order in time series data, why was this omitted? Does this impact the model's effectiveness for time series classification? I'd appreciate any insights on this design choice. Thank you.
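For concreteness, here is a minimal sketch of the kind of standard sinusoidal positional encoding (from "Attention Is All You Need") that could be added before the transformer blocks. The `positional_encoding` helper below is hypothetical, not code from the example; the usage comments assume the example's `build_model` structure, where the input has shape `(seq_len, num_features)`:

```python
import numpy as np
import tensorflow as tf


def positional_encoding(seq_len, d_model):
    # Standard sinusoidal encoding: sin on even dims, cos on odd dims.
    positions = np.arange(seq_len)[:, np.newaxis]    # (seq_len, 1)
    dims = np.arange(d_model)[np.newaxis, :]         # (1, d_model)
    angle_rates = 1.0 / np.power(
        10000.0, (2 * (dims // 2)) / np.float32(d_model)
    )
    angles = positions * angle_rates                 # (seq_len, d_model)
    angles[:, 0::2] = np.sin(angles[:, 0::2])        # even indices
    angles[:, 1::2] = np.cos(angles[:, 1::2])        # odd indices
    # Add a batch dimension so it broadcasts over the batch axis.
    return tf.cast(angles[np.newaxis, ...], dtype=tf.float32)


# Hypothetical usage inside the example's build_model(), where
# `inputs` has shape (batch, seq_len, num_features):
#
#     x = inputs + positional_encoding(input_shape[0], input_shape[1])
#     for _ in range(num_transformer_blocks):
#         x = transformer_encoder(x, head_size, num_heads, ff_dim, dropout)
```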

Standalone code to reproduce the issue or tutorial link

https://github.com/keras-team/keras-io/blob/master/examples/timeseries/timeseries_classification_transformer.py

Relevant log output

No response

@sachinprasadhs
Collaborator

Tagging the author of the example for more info: @ntakouris, could you please take a look into this?

@arun-nemani

I've been asking the same question!

@Atousa-Kalantari
Author

Hi,

I'm still waiting for a response to my question about the lack of positional encoding in the timeseries_classification_transformer.py example. Could someone clarify why it was omitted and what impact that has on model performance?

Thanks!
