Siri Epoc Prototype (in Progress)
This video demonstrates a prototype for the virtual assistant side of the thesis project. The brainwave controller will be implemented at a later date. For now, the user activates speech recognition on the iPhone by placing a hand over the front camera. Once speech recognition enters active listening mode, it listens for speech from the user. Once the user speaks, the voice assistant listens for silence to determine when to stop listening and process the speech. After processing the speech, the virtual assistant displays the recognized speech and response text along with the action.
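To illustrate the listen-for-silence step described above, here is a minimal sketch of how end-of-speech detection might work. This is not the app's actual code: the frame-energy approach, threshold, and frame counts are all illustrative assumptions.

```python
# Hypothetical sketch of end-of-speech detection by listening for silence.
# Assumes audio arrives as short frames of samples; values are illustrative.

def rms(frame):
    """Root-mean-square energy of one audio frame (a list of samples)."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def end_of_speech_index(frames, silence_threshold=0.02, silence_frames_needed=30):
    """Return the index of the frame where listening should stop:
    after speech has started, once `silence_frames_needed` consecutive
    frames fall below `silence_threshold`. Returns None if speech never ends."""
    speech_started = False
    quiet_run = 0
    for i, frame in enumerate(frames):
        if rms(frame) >= silence_threshold:
            speech_started = True   # user has begun talking
            quiet_run = 0           # any loud frame resets the silence counter
        elif speech_started:
            quiet_run += 1
            if quiet_run >= silence_frames_needed:
                return i  # enough trailing silence: stop and process speech
    return None
```

With short frames (say, 10-20 ms each), requiring a few dozen consecutive quiet frames keeps the assistant from cutting the user off during brief pauses between words.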
Here’s the video below (in .flv format due to size limitations):
And below I have included some screen captures from the iPhone app: