I probably have about a year to patent this. The images below show the system I came up with for a 'bio-telepathy' based on the tension in the eye and jaw muscles. Five tension statuses on each of the two muscle groups allow for 25 different combinations, each corresponding to one of the phonetic sounds most commonly found in the English language. Stringing the combinations together allows for the computer-speech of words without the user opening their eyes or mouth. I'm considering starting a Kickstarter to fund this.
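As a rough sketch of the combination math: five tension levels on the eye channel times five on the jaw channel gives 5 × 5 = 25 slots, which comfortably covers the 22 base sounds listed further down. The level names and the order of the phoneme assignments here are my own placeholders, not anything fixed in the design.

```python
# Hypothetical sketch: five tension levels on each of two muscle channels
# (eye and jaw) give 5 * 5 = 25 combinations. Level names and the order
# of the phoneme assignments are assumptions for illustration only.
LEVELS = ["LOW", "MED_LOW", "MED", "MED_HIGH", "HIGH"]

PHONEMES = [
    "a", "b", "e", "f", "g", "h", "i", "k", "l", "m",
    "o", "r", "s", "t", "u", "y", "sh", "ch", "th",
    "ä", "ë", "ï",  # the three umlaut vowels from the list below
]

# Assign each (eye, jaw) pair a phoneme; the last three pairs stay free,
# e.g. for control signals such as a 'post to feed' command.
combo_to_phoneme = {}
for idx, phoneme in enumerate(PHONEMES):
    eye, jaw = divmod(idx, len(LEVELS))
    combo_to_phoneme[(LEVELS[eye], LEVELS[jaw])] = phoneme

print(len(combo_to_phoneme))              # 22 of the 25 slots used
print(combo_to_phoneme[("LOW", "LOW")])   # 'a'
```

Three of the 25 combinations are deliberately left unassigned, which is where command signals (like the 'post' command mentioned below) could live.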
[image: BLINK - level LOW]
■ Updated: 08.25.2014
In addition to wearing a headset, users in the experiment would have a 'line-in': an earpiece of some sort with a robotically generated generic voice that feeds them information. The information would be simple commands delivered over the feed, similar to a chat function. The user interface is not graphical: initially it is based on users communicating with the server via brainwaves and the server sending back audio information based on the total / sum of all of the users' communication posts. However, a GUI could be implemented for the controller who is monitoring all the feeds, or for each unit separately (if they are using Google Glass, for instance).
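The chat-function feed described above can be sketched minimally: users post short decoded commands, and the server sums the pending posts into one message that every earpiece would hear. The class and method names here are my own placeholders, and the robotic earpiece voice is stubbed with a plain string.

```python
# Minimal sketch of the shared 'chat function' feed: users post short
# commands, the server sums them into one broadcast message. All names
# are placeholders; real audio output is stubbed with a string.
from collections import deque

class BlinkFeed:
    def __init__(self):
        self.posts = deque()

    def post(self, user, text):
        """A user posts a decoded command to the shared feed."""
        self.posts.append((user, text))

    def broadcast(self):
        """Sum all pending posts into one message for every earpiece."""
        summary = "; ".join(f"{user}: {text}" for user, text in self.posts)
        self.posts.clear()  # posts are consumed once broadcast
        return summary or "(feed empty)"

feed = BlinkFeed()
feed.post("user1", "move block left")
feed.post("user2", "done")
print(feed.broadcast())  # user1: move block left; user2: done
```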
I have already come up with an unusual system to communicate sounds across the feed, though it is not efficient yet. The system is based on at least 22 phonetic 'base' sounds that are common in the English language:
a b e f g h i k l m o r s t u y sh ch th
a-umlaut e-umlaut i-umlaut
So the user would blink a blink code to build up a combination of phonetic sounds, then issue a blink command to post the sounds to the audio feed.
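The decode step above can be sketched as follows. The actual blink codes are undecided in the post, so this table (short/long blink symbols mapped to a few phonemes, plus a reserved 'post' symbol) is invented purely for illustration.

```python
# Sketch of the blink-code idea: a stream of blink symbols maps to
# phonemes, and a reserved symbol posts the buffered sounds to the feed.
# Every entry in this table is an invented placeholder.
BLINK_CODE = {
    "S": "h",      # short blink   -> 'h'  (assumed)
    "L": "e",      # long blink    -> 'e'  (assumed)
    "SS": "l",     # double short  -> 'l'  (assumed)
    "LL": "o",     # double long   -> 'o'  (assumed)
    "SL": "POST",  # short + long  -> post command (assumed)
}

def decode(symbols):
    """Turn blink symbols into words, flushing the buffer on POST."""
    buffer, words = [], []
    for sym in symbols:
        token = BLINK_CODE[sym]
        if token == "POST":
            words.append("".join(buffer))
            buffer = []
        else:
            buffer.append(token)
    return words

print(decode(["S", "L", "SS", "SS", "LL", "SL"]))  # ['hello']
```

A real version would also need timing thresholds to tell a short blink from a long one, which is exactly where the tension-level measurement would come in.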
■ Original Post:
"Yesterday, thought about coming up with a 'blink-code', that is like a communication system for brain-waves. Basically the experiment would be set up for the participants to only communicate language via 'blinking'. They would all have a headset on. The end goal would be to correctly line up objects in the correct sequence. The user would do this without talking and using a specified 'language' for 'blinking' that could be memorized or displayed via Google Glasses. So, it would be like a sign language or Morse code for 'blinking', picked up via brain waves. Initially, I wanted to communicate via 'whisper-telepathy', where the user doesn't open their mouth but uses vocal chords to 'whisper' directions to the communication feed. Still experimenting. If I put the brain wave detector on the vocal spot, perhaps it could pick up vibrations of the user's vocal cords..."