AI-based spectacles that describe the surroundings to blind people in real time.
- They assist a visually impaired person in navigating from one place to another.
- An OCR module helps the visually impaired person read books and newspapers.
- A facial recognition module helps the visually impaired person identify who is sitting in front of them.
It uses the gTTS (Google Text-to-Speech) library to convert strings to speech, and the playsound library to play the audio that gTTS returns.
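A minimal sketch of this text-to-speech step, assuming the `gtts` and `playsound` packages from requirements.txt; the filename and sample sentence are illustrative:

```python
from gtts import gTTS
from playsound import playsound

def speak(text):
    # Convert the string to speech with gTTS, save it as an MP3,
    # then play the saved file back with playsound.
    gTTS(text=text, lang="en").save("speech.mp3")
    playsound("speech.mp3")

speak("A person is sitting in front of you.")
```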
It uses the Tesseract OCR engine, which takes an OpenCV frame as input, recognizes the text in it, and returns the text as a string.
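A sketch of that OCR call, assuming the common `pytesseract` wrapper around Tesseract (the repo may invoke Tesseract differently):

```python
import cv2
import pytesseract

def read_text(frame):
    # Tesseract performs better on grayscale input than on raw BGR frames
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # image_to_string accepts a NumPy array and returns the recognized text
    return pytesseract.image_to_string(gray)
```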
We have trained our own deep learning model for live-environment captioning. It is a multimodal neural network that combines feature vectors from both a CNN and an RNN, so training requires two inputs: the image to be described, which is fed to the CNN, and the sequence of words generated so far, which is fed to the RNN. This module takes an OpenCV frame as input and returns a description of the frame.
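One common way to wire such a two-input captioning network is the Keras "merge" architecture sketched below; the 2048-d CNN feature vector, layer sizes, and vocabulary parameters are illustrative assumptions, not the values used in this repo:

```python
from tensorflow.keras.layers import (Input, Dense, LSTM, Embedding,
                                     Dropout, add)
from tensorflow.keras.models import Model

vocab_size, max_length = 5000, 34  # assumed caption vocabulary and length

# Image branch: a precomputed CNN feature vector projected to 256 dims
image_input = Input(shape=(2048,))
img = Dropout(0.5)(image_input)
img = Dense(256, activation="relu")(img)

# Text branch: the partial caption so far, embedded and run through an LSTM
caption_input = Input(shape=(max_length,))
txt = Embedding(vocab_size, 256, mask_zero=True)(caption_input)
txt = Dropout(0.5)(txt)
txt = LSTM(256)(txt)

# Merge both modalities and predict the next word of the caption
decoder = add([img, txt])
decoder = Dense(256, activation="relu")(decoder)
output = Dense(vocab_size, activation="softmax")(decoder)

model = Model(inputs=[image_input, caption_input], outputs=output)
model.compile(loss="categorical_crossentropy", optimizer="adam")
```

At inference time, the caption is generated word by word: the model is called repeatedly, appending each predicted word to the text input until an end token or the maximum length is reached.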
It is built on the face_recognition library, which uses dlib's deep learning implementation to recognize the person in an image. It takes an OpenCV frame as input and returns the person's name as a string.
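A sketch of the recognition step using the `face_recognition` package's documented API; the known face image and name are placeholders:

```python
import cv2
import face_recognition

# Hypothetical known face, encoded once at startup
known_image = face_recognition.load_image_file("alice.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

def recognize(frame):
    # face_recognition expects RGB, while OpenCV frames are BGR
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for encoding in face_recognition.face_encodings(rgb):
        # compare_faces returns True where the encodings match
        if face_recognition.compare_faces([known_encoding], encoding)[0]:
            return "Alice"
    return "Unknown"
```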
Website Link: http://godseye.epizy.com/
Step 1: Download the repository as a zipped file
Step 2: Extract the zip
Step 3: Install the dependencies: `pip install -r requirements.txt`
Step 4: Run main.py
Step 5: The webcam will start. There are 3 modes: the live-environment captioning module runs first (Mode 1; press 1 to start this mode), press 2 to start the Facial Recognition mode, and press 3 to start the Optical Character Recognition mode (i.e. press 1, 2, or 3 to switch between modes; a sketch of this key-handling loop follows these steps)
Step 6: Press the ESC key to exit
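A sketch of how such a mode-switching webcam loop might look with OpenCV; the per-mode handler calls are placeholders for the modules described above, not the actual function names in main.py:

```python
import cv2

cap = cv2.VideoCapture(0)
mode = 1  # start in live-environment captioning mode

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Dispatch the frame to the active module (illustrative names):
    # if mode == 1: describe_frame(frame)   # captioning
    # elif mode == 2: recognize(frame)      # facial recognition
    # elif mode == 3: read_text(frame)      # OCR
    cv2.imshow("GodsEye", frame)

    key = cv2.waitKey(1) & 0xFF
    if key in (ord("1"), ord("2"), ord("3")):
        mode = int(chr(key))  # switch between the 3 modes
    elif key == 27:  # ESC ends the program
        break

cap.release()
cv2.destroyAllWindows()
```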