Mobile Computing at the Speed of Thought
Children in 20 years will laugh at the ape-like smartphone interfaces of today, imagining us tapping away at and shuffling around a lump of plastic. Their brain-computer interfaces will instantaneously relay messages via subdermal implant to the ubiquitous Cloud of Things, with visual feedback on the HUD of their semi-permanent contact lenses.
For now, though, we'll have to make do with what we've got. The Myo isn't ready for prime time (and you generally don't want to be waving your arms around in public), and the Magic Leap is still just hopping about in secret.
All you need is an app to act as translator and middleman: receiving commands, opening apps, and relaying commands to those apps. Input would obviously be limited to simple interactions; you couldn't converse with it like Siri, given the limitations of subvocal recognition. But if input were contextual and interactive, it should be possible to decode it against a small subset of expected terms and still create powerful interfaces.
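The idea of decoding against a subset of expected terms can be sketched as a simple lookup keyed by context: the recognizer only has to distinguish the handful of tokens valid right now, not an open vocabulary. This is a minimal illustration; all the context names, tokens, and actions below are hypothetical.

```python
from typing import Optional

# Each context exposes only a few valid subvocal tokens, mapped to actions.
# Names here are illustrative, not a real API.
CONTEXT_VOCAB = {
    "email_notification": {"yes": "read_aloud", "no": "dismiss"},
    "incoming_call": {"answer": "accept_call", "ignore": "dismiss"},
    "music_player": {"skip": "next_track", "replay": "restart_track",
                     "louder": "volume_up"},
}

def decode(context: str, token: str) -> Optional[str]:
    """Map a recognized token to an action, given the active context."""
    return CONTEXT_VOCAB.get(context, {}).get(token)

print(decode("incoming_call", "ignore"))  # -> dismiss
print(decode("music_player", "skip"))     # -> next_track
```

Because each context narrows the vocabulary to two or three words, even a noisy subvocal signal has a fighting chance of being classified correctly.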
A basic example: you get a notification of a new email, with an option to have it read aloud, requiring only a subvocalized 'yes'. Or you're alerted to a phone call, which you can dismiss by subvocalizing 'ignore'. More sophisticated examples: you could control your music player with simple commands like 'skip', 'replay', and 'louder', or send a panic signal to a friend when you get stuck in a dull conversation at a party.
Interactions can get more complex and interleaved. Here is an example exchange demonstrating some possibilities:
Phone: new message from SO "friends will arrive at 7"
Phone: acknowledgement sent
You: go home
Phone: the train will take 30 minutes plus a 10-minute walk, or the bus 40 minutes with a 5-minute walk
You: train
Phone: walk north, then turn right on Station Street
You: resume book
Phone: [resumes your audiobook - navigation occasionally interrupts with instruction]
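The last step of that exchange, navigation occasionally interrupting the audiobook, amounts to scheduling audio by priority. Here is one hedged way to sketch it with a small priority queue; the class, priority levels, and clip strings are all assumptions for illustration.

```python
import heapq

class AudioScheduler:
    """Toy scheduler: lower priority number plays first, FIFO within a level."""
    NAV, MESSAGE, BOOK = 0, 1, 2  # hypothetical priority levels

    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker preserves submission order

    def submit(self, priority, clip):
        heapq.heappush(self._queue, (priority, self._counter, clip))
        self._counter += 1

    def play_next(self):
        if not self._queue:
            return None
        _, _, clip = heapq.heappop(self._queue)
        return clip

sched = AudioScheduler()
sched.submit(AudioScheduler.BOOK, "audiobook, chapter 3")
sched.submit(AudioScheduler.NAV, "turn right on Station Street")
print(sched.play_next())  # the navigation instruction interrupts first
```

A real implementation would pause and resume the audiobook rather than queue it whole, but the priority ordering is the core of how the interruption works.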
What would you want to do if your mobile computer were only a thought away?