
Updated Willow (though WAS) and also WIS on server. #121

Open
bert269 opened this issue Sep 26, 2023 · 10 comments

Comments


bert269 commented Sep 26, 2023

Then I updated to the latest (willow-RC2) software on each of the ESP32-S3 (and one S3-lite) devices, and none is responding. They respond to the wake word, but then just sit there. This is a screenshot of what's happening on WIS:
[screenshot: WIS output]

I'll flash the previous RC-1 to the first S3 and see what happens. All the Willow devices were still working fine this morning, until I updated everything.


bert269 commented Sep 26, 2023

I re-installed RC-1 (OTA) to the S3-lite device.
No change. So I rebooted the WIS server, and this was the result after I issued the wake word with a command:
[screenshot: WIS output after the wake word command]

Then, a few moments later, I saw this on the WIS server display:
[screenshot: WIS server display]

Houston, we have a problem...


bert269 commented Sep 26, 2023

Stopped WIS and WAS.
Then I ran sudo apt update && sudo apt upgrade -y, and it had 48 packages to upgrade (I just did this two days ago). Included were some Nvidia packages, so maybe, just maybe, I had older code for a newer WIS?
Rebooted the server.
Started WAS and saw the clients.
Re-installed WIS, since I am not sure, given the older Nvidia code (from before the OS upgrade), what happened earlier when I tried to install it. It seems to me that more code is being downloaded during the WIS re-install.

Nothing - no response from the S3 devices. Not even a response back from WIS.


bert269 commented Sep 26, 2023

The PROBLEM is the beam size! A while back, someone in a GitHub response told me to use beam_size=5. Well, starting up the server, I noticed it uses beam_size=1. So I modified my config, flashed the S3-lite with beam_size=1, and it magically worked.

WHAT is the difference? Why did beam_size=5 work before, and now it requires 1?
At least it is working, and I am happy again.
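For reference, the model and beam size discussed here are passed to WIS as query parameters on the configured WIS URL. A minimal sketch of building such a URL; the host, port, and endpoint path are hypothetical placeholders, so check your own WAS/WIS configuration for the real values:

```python
from urllib.parse import urlencode

# Hypothetical host, port, and endpoint path -- substitute your own.
# model and beam_size are the query parameters discussed in this thread.
base = "https://wis.example.com:19000/api/willow"
params = {"model": "medium", "beam_size": 1}
wis_url = f"{base}?{urlencode(params)}"
print(wis_url)
# https://wis.example.com:19000/api/willow?model=medium&beam_size=1
```

Changing beam_size=5 to beam_size=1 in a config like this is the change described above.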

@kristiankielhofner
Contributor

We've never made an API change between Willow and WIS so I'm not sure why anything with the releases should matter.

Beam size tweaks a parameter inside the model, more or less trading speed for accuracy. People throw a lot of things around on GitHub, the internet, etc., but the medium model with beam_size=1 is the current WIS default. We've been doing extensive testing, and the next release of WIS will actually default to the small model with beam_size=2. That said, we also routinely test all of the possible model and beam_size configurations.
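To illustrate what beam_size controls, here is a toy two-step beam search. This is illustrative only, not WIS/Whisper code; the token scores are made up. A wider beam keeps more candidate prefixes alive and can find a higher-scoring transcript than greedy decoding (beam_size=1), at the cost of more work and memory per step:

```python
# Toy beam search over two decoding steps, using made-up log-probabilities.
def beam_search(step1, step2, beam_size):
    # Step 1: score every first token, keep only the top beam_size.
    beams = sorted(step1.items(), key=lambda kv: kv[1], reverse=True)
    beams = beams[:beam_size]
    # Step 2: the score of the next token depends on the chosen prefix.
    finished = [([tok, nxt], score + lp)
                for tok, score in beams
                for nxt, lp in step2[tok].items()]
    # Return the highest-scoring complete hypothesis.
    return max(finished, key=lambda f: f[1])

# "turf" looks best at step 1, but only "turn" leads to a good continuation.
step1 = {"turn": -0.3, "turf": -0.2}
step2 = {"turn": {"on": -0.1}, "turf": {"off": -1.0}}

greedy = beam_search(step1, step2, beam_size=1)  # picks ["turf", "off"]
wide = beam_search(step1, step2, beam_size=2)    # picks ["turn", "on"]
```

The greedy decode commits to the locally best first token and ends up with a worse overall transcript, which is the accuracy trade-off the parameter tunes.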

I've never seen, and we've never had reports of, issues with any of the models or beam sizes. The only thing I can think of is that larger models and larger beam sizes use more GPU memory. What model of Nvidia GPU do you have? We do a lot on WIS startup to define parameters even for really low VRAM and older GPUs; I myself test extensively with a GTX 1060 6GB, which is towards the lowest GPU we could support.

Also, you probably want to be a little more careful randomly upgrading Nvidia drivers. We have extensive compatibility with Nvidia drivers as well (we test with 525-535 on Ubuntu), but like all driver upgrades, there can definitely be lower-level, difficult-to-debug issues from swapping random drivers around.

@kristiankielhofner kristiankielhofner transferred this issue from toverainc/willow Sep 26, 2023

bert269 commented Sep 26, 2023

I am running WIS on a GTX 1080 Ti.
As I said, I did a full upgrade of Ubuntu 22.04.1 only a couple of days ago, and everything was still working fine. Then today I noticed that there were some repository changes and upgraded everything.
Well, it is working now - so for me that is what matters most. Let me know if you need any reports or other testing to be done with this - I am willing to help.

@kristiankielhofner
Contributor

The GTX 1080 Ti has 11GB of VRAM, which is enough to run several instances of WIS concurrently; you definitely shouldn't be having any issues running any models at any beam size. Your issue with beam_size=5 does point to some kind of strange card/driver quirk we have never seen before.

I mentioned doing Nvidia driver upgrades carefully in the general sense as there is definitely potential to encounter some rough edges given how fundamental they are. It's generally safe but every once in a while things go wrong.


bert269 commented Sep 28, 2023 via email

Contributor

nikito commented Sep 28, 2023

Contributor

kristiankielhofner commented Sep 28, 2023

@bert269 - It is still very strange you're having problems with beam_size=5. Can you provide your configured full WIS URL when you try that option? Please copy and paste it here (instead of a screenshot).

On another note, you almost certainly don't need and probably don't want model=large and beam_size=5. We have found that for typical short speech commands those settings are almost too smart and reduce the consistency of responses, which is important because HA, etc match on the exact text returned for processing commands.
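A sketch of why that decoding consistency matters downstream. Systems like Home Assistant match on the exact transcript text; the phrases and alternate decodes below are made up for illustration:

```python
# Downstream command matching is an exact string comparison, so any
# variation in the transcript -- even case or punctuation -- breaks it.
expected = "turn on the kitchen light"

run1 = "turn on the kitchen light"   # e.g. medium model, beam_size=1
run2 = "Turn on the kitchen light."  # same audio, a "smarter" decode

match1 = run1 == expected   # True
match2 = run2 == expected   # False: case and punctuation differ
```

A larger model with a wide beam can produce perfectly valid but slightly different wordings from run to run, which is worse for this kind of matching than a smaller, more deterministic configuration.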


bert269 commented Oct 20, 2023

I had to re-install everything this morning. I will test this out as soon as I get the other issue resolved, and provide feedback.
