A voice-activated rotating museum. While the options are admittedly limited, this mini museum boasts two of the most famous paintings in the world, printed on the finest white printer paper. Simply ask to see one of the two paintings, either the Mona Lisa or The Scream. One Raspberry Pi, the client, takes input from a microphone, converts it to text, and looks for a related keyword. Once it recognizes a keyword for either painting, it sends a request to the IP address of a second Raspberry Pi, the server, indicating which painting was mentioned. The server Pi then turns a planetary gear system 90 degrees to the left or right to display the chosen painting in the front window, and afterwards returns it to its starting position.
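For concreteness, here is a minimal sketch of the client-side flow. The server address, port, `/painting` endpoint, keyword list, and JSON payload keys are placeholder assumptions rather than the project's actual values, and it uses the Google recognizer bundled with the SpeechRecognition library; adjust all of these to match your setup.

```python
# Hedged client sketch: listen on the microphone, look for painting keywords,
# and notify the server Pi. Address, endpoint, and payload keys are assumptions.
import requests
import speech_recognition as sr

SERVER_URL = "http://192.168.1.20:5000/painting"  # assumed server IP and port
KEYWORDS = {
    "mona": "mona_lisa",
    "lisa": "mona_lisa",
    "scream": "the_scream",
}

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    while True:
        audio = recognizer.listen(source)
        try:
            text = recognizer.recognize_google(audio).lower()
        except (sr.UnknownValueError, sr.RequestError):
            continue  # could not understand the audio (or API unreachable); keep listening
        for keyword, painting in KEYWORDS.items():
            if keyword in text:
                # Tell the server Pi which painting was requested
                requests.post(SERVER_URL, json={"painting": painting})
                break
```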
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
- Client Python Libraries
- SpeechRecognition (used to recognize speech on the client)
- PyAudio (dependency of SpeechRecognition for microphone input)
- requests (to send information to the server Pi)
- Server Python Libraries
- Equipment
- NEMA-17 Stepper Motor (Actuator)
- Stepper Motor Driver (see the GPIO sketch after this list)
- Microphone (Sensor)
- Raspberry Pi 4 x2
- Breadboard power supply (accepts 12 V through a 2.1 x 5.5 mm plug from a wall adapter and supplies 12 V, 5 V, and 3.3 V simultaneously)
- Custom PCB
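To show how the stepper motor and driver listed above are typically controlled from the Pi, here is a hedged sketch that pulses a STEP/DIR driver over GPIO. The pin numbers, step angle, and gear ratio are placeholders, not the project's actual wiring; a bare NEMA-17 commonly moves 1.8 degrees per full step, so 50 pulses give roughly a quarter turn before any gear reduction.

```python
# Hedged sketch of stepping the motor through a STEP/DIR driver (e.g. an A4988
# or DRV8825). Pin numbers, step angle, and gear ratio are placeholders --
# adjust them to match your build.
import time
import RPi.GPIO as GPIO

STEP_PIN = 20      # assumed BCM pin wired to the driver's STEP input
DIR_PIN = 21       # assumed BCM pin wired to the driver's DIR input
STEP_ANGLE = 1.8   # degrees per full step for a typical NEMA-17
GEAR_RATIO = 1.0   # replace with your planetary gear reduction

GPIO.setmode(GPIO.BCM)
GPIO.setup(STEP_PIN, GPIO.OUT)
GPIO.setup(DIR_PIN, GPIO.OUT)

def rotate(degrees, clockwise=True):
    """Pulse the driver enough times to turn the output by `degrees`."""
    GPIO.output(DIR_PIN, GPIO.HIGH if clockwise else GPIO.LOW)
    steps = int(abs(degrees) * GEAR_RATIO / STEP_ANGLE)
    for _ in range(steps):
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(0.001)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(0.001)

if __name__ == "__main__":
    try:
        rotate(90, clockwise=True)   # quarter turn toward one painting
        rotate(90, clockwise=False)  # and back to center
    finally:
        GPIO.cleanup()
```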
Simply download the files in this repository, then copy client.py to the Raspberry Pi attached to the microphone and server.py to the Raspberry Pi attached to the stepper motor.
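As a rough illustration of what the server side does, here is a standard-library-only sketch that accepts the client's request and drives the motor; the real server.py may use a different framework, endpoint, or payload format. The imported `motor.rotate` is the hypothetical helper sketched in the equipment section above.

```python
# Hedged server-side sketch using only the Python standard library. Endpoint,
# port, and payload keys match the assumptions in the client sketch above.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

from motor import rotate  # hypothetical module holding the GPIO helper above

class PaintingHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The client posts JSON such as {"painting": "mona_lisa"}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        painting = payload.get("painting")

        # Quarter turn toward the requested painting, hold for viewing,
        # then return the gear system to its starting position.
        if painting == "mona_lisa":
            rotate(90, clockwise=True)
            time.sleep(10)
            rotate(90, clockwise=False)
        elif painting == "the_scream":
            rotate(90, clockwise=False)
            time.sleep(10)
            rotate(90, clockwise=True)

        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 5000), PaintingHandler).serve_forever()
```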
In our design, a lower box housed the electronics, while the upper box housed the gears and paintings.
- Sawyer Bailey Paccione - Client Code and Gear Design - Portfolio
- Olif Hordofa - Server Code and Box Design
This project is licensed under the MIT License - see the LICENSE.md file for details