M.F.A Thesis Topic Selection
The purpose of this document is to select and propose a single project for design and development as an M.F.A. thesis. The selection process started with the following three initial concepts:
- Augmented Reality in Mobile Games to Improve Family Relationships
- Gesture Recognition for Touch-less Gaming Systems
- Mind-Controlled Systems for Human-to-Machine Interaction with Voice Assistants
After reviewing peer feedback and evaluating my background, interests, and resource availability, the decision was made to pursue the thesis topic titled “Mind-Controlled Systems for Human-to-Machine Interaction with Voice Assistants.” Although some may perceive this topic to be the most ambitious and complex of the three, it is the most compatible with my recent technological experience and addresses the need to push brainwave and voice technology forward as it relates to mobile devices and interactivity.
Proposal for Mind-Controlled Systems for Human-to-Machine Interaction with Voice Assistants
In recent years, several EEG/brainwave systems and SDKs have been released to developers for research or development of mind-controlled interfaces. Many of these systems consist of some type of wearable sensor, such as a headset fitted with electrodes that detect electrical activity from the brain. The potential use of these brainwave-analyzing systems in games and other interactive media has attracted much interest. The idea of controlling a game or a computer program with the mind is alluring and exciting to many.
New applications that make use of these technologies also benefit from the widespread availability of mobile devices such as smartphones and tablets. Devices such as the Emotiv EPOC and NeuroSky’s MindWave can connect over Bluetooth, and their SDKs (Software Development Kits) provide resources for developers creating unique interfaces and applications. These SDKs, if carefully implemented, can foster genuinely innovative applications.
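To illustrate the connection step, the following is a minimal sketch of discovering and connecting to a headset over Bluetooth LE using Apple’s CoreBluetooth framework. The service UUID is a placeholder, and in practice the Emotiv and NeuroSky SDKs supply their own connection layers, so this only shows the general approach rather than either vendor’s actual API.

```swift
import CoreBluetooth

// Minimal sketch of discovering an EEG headset over Bluetooth LE.
// The service UUID below is a placeholder; real devices are normally
// accessed through the vendor's own SDK rather than raw CoreBluetooth.
final class HeadsetScanner: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!
    private var headset: CBPeripheral?
    private let headsetServiceUUID = CBUUID(string: "FFF0") // placeholder service UUID

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        // Begin scanning only once Bluetooth is powered on.
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [headsetServiceUUID], options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        // Keep a strong reference and connect to the first matching headset.
        headset = peripheral
        central.stopScan()
        central.connect(peripheral, options: nil)
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        print("Connected to headset: \(peripheral.name ?? "unknown")")
    }
}
```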
Another area drawing increased interest is speech recognition in virtual assistants, such as the iPhone’s Siri. While these virtual assistants are attracting many users, some may still find it difficult to communicate effectively with the system. Even the best natural-language understanding fails if the system does not correctly hear the words being spoken. These speech-recognition systems depend upon quality microphones and good speaking environments to be useful. If the speaker does not speak clearly or if the background noise is too loud, the system may fail to recognize the words.
Using a mind-controlled system, such as the Emotiv EPOC or NeuroSky’s MindWave, as an integrated part of a voice-recognition assistant could enhance the effectiveness of that assistant. To explore this avenue, an iPhone application would be created that uses Siri speech recognition as its core component. The application would allow interactivity through both voice recognition and brainwave control, and would use a custom natural-language-understanding algorithm to interpret input, provide feedback to the user, and perform user-initiated tasks on the iPhone.
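As a rough sketch of that core component, the following assumes Apple’s Speech framework (SFSpeechRecognizer) standing in for the Siri speech recognition; permission handling (microphone and speech-recognition authorization) and error handling are omitted and would be worked out during development.

```swift
import Speech
import AVFoundation

// Minimal sketch of live speech recognition, assuming Apple's Speech framework
// stands in for the "Siri speech recognition" component described above.
// SFSpeechRecognizer.requestAuthorization and microphone permission must be
// granted before this runs.
final class VoiceListener {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let audioEngine = AVAudioEngine()
    private let request = SFSpeechAudioBufferRecognitionRequest()

    func start(onPhrase: @escaping (String) -> Void) throws {
        // Feed microphone audio into the recognition request.
        let input = audioEngine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            self.request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        // Deliver each partial or final transcription to the caller.
        recognizer?.recognitionTask(with: request) { result, error in
            if let result = result {
                onPhrase(result.bestTranscription.formattedString)
            }
        }
    }
}
```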
The visual component of this project will consist of a brainwave controller (i.e., the Emotiv EPOC or NeuroSky MindWave) paired to an iPhone running the custom application. The custom application will implement methods for listening to the brainwave controller for commands, processing the commands as they are interpreted from the controller, and executing the appropriate action on the iPhone based upon that processing (a sketch of this command dispatch follows the task list below). The interface will also allow the user to issue voice commands so that the relative effectiveness of the speech-recognition and “brainwave recognition” processing can be evaluated. The application will use only Siri speech recognition and text-to-speech capabilities, along with a limited natural-language-processing algorithm for interpreting the patterns. It is also possible that voice recognition will be used as part of the “brainwave” action training for the system. The application will allow users to perform a variety of basic tasks on the iPhone with brainwave commands, which may include the following from within the application:
- Opening new views or menus
- Activating buttons
- Opening the Maps app
- Playing a video
- Sending a text message
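The mapping from recognized brainwave commands to these tasks might look like the following sketch. The command names, segue identifier, and message body are hypothetical placeholders; a real headset SDK would deliver its own event types, and this only illustrates the dispatch step described above.

```swift
import UIKit
import AVKit
import MessageUI

// Hypothetical set of trained "mental commands" and the in-app actions they
// trigger. A real headset SDK would deliver its own event types; this sketch
// only illustrates the dispatch step.
enum BrainwaveCommand {
    case openMenu, activateButton, openMaps, playVideo, sendMessage
}

final class CommandDispatcher {
    weak var viewController: UIViewController?
    var videoURL: URL?           // hypothetical clip bundled with the app
    var focusedButton: UIButton? // button currently highlighted in the UI

    func handle(_ command: BrainwaveCommand) {
        switch command {
        case .openMenu:
            // Hypothetical segue to a menu screen defined in the storyboard.
            viewController?.performSegue(withIdentifier: "showMenu", sender: nil)
        case .activateButton:
            // Fire the currently focused button as if it had been tapped.
            focusedButton?.sendActions(for: .touchUpInside)
        case .openMaps:
            if let url = URL(string: "http://maps.apple.com/?q=coffee") {
                UIApplication.shared.open(url)
            }
        case .playVideo:
            guard let videoURL = videoURL else { return }
            let playerController = AVPlayerViewController()
            playerController.player = AVPlayer(url: videoURL)
            viewController?.present(playerController, animated: true)
        case .sendMessage:
            guard MFMessageComposeViewController.canSendText() else { return }
            let compose = MFMessageComposeViewController()
            compose.body = "Sent by brainwave command" // placeholder text
            // A messageComposeDelegate would be needed to dismiss the sheet.
            viewController?.present(compose, animated: true)
        }
    }
}
```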
Evaluations of the working system would provide evidence of its benefits and effectiveness in improving the quality of the human-machine interface. A video will be produced showing the use of both the voice-controlled system and the brainwave-controlled system.