
remove unwanted code from transformer_asr.py #1410

Merged: 3 commits merged into keras-team:master on Jun 28, 2023

Conversation

SuryanarayanaY (Contributor) commented:

In the example tutorial Automatic Speech Recognition with Transformer, the positional embedding self.pos_emb in the SpeechFeatureEmbedding class is initialised but never used.

I therefore propose removing this line of code, from both transformer_asr.py and the matching .ipynb, to avoid unnecessary confusion. The affected class is sketched below.
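
For reference, a sketch of the class in question, assuming the layer hyperparameters from the published tutorial (paraphrased here; see the example source for the exact code). The self.pos_emb line is the one this PR removes:

```python
from tensorflow.keras import layers


class SpeechFeatureEmbedding(layers.Layer):
    def __init__(self, num_hid=64, maxlen=100):
        super().__init__()
        # Three strided Conv1D layers downsample the spectrogram in time.
        self.conv1 = layers.Conv1D(
            num_hid, 11, strides=2, padding="same", activation="relu"
        )
        self.conv2 = layers.Conv1D(
            num_hid, 11, strides=2, padding="same", activation="relu"
        )
        self.conv3 = layers.Conv1D(
            num_hid, 11, strides=2, padding="same", activation="relu"
        )
        # The line this PR removes: the embedding is constructed here
        # but never referenced in call(), so it only adds unused weights.
        self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=num_hid)

    def call(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        return self.conv3(x)
```

Since call() never touches self.pos_emb, deleting it changes no behaviour; it only drops unused weights from the model.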

Fixes #1394

Thanks

fchollet (Member) left a comment:


LGTM, thank you!

fchollet merged commit 4286291 into keras-team:master on Jun 28, 2023
3 checks passed
Development

Successfully merging this pull request may close these issues:

transformer_asr.py: positional embedding unutilized in source CNN downsampler.