Horace Dediu suggests understanding the history of disruption in computing in terms of the history of input and, secondarily, output methods. I think there’s a lot to this, and the overall trend seems to run toward lessening the mediation between person and computer. Siri is a great example — conversing with one’s computer is a very immediate experience. Horace has predicted, in several venues, that voice control will be the basis of a future low-end disruption.
The ideal computer might interface directly with the brain, responding to internal visualizations or subvocalized commands and outputting directly to the visual and auditory cortices. It has already been envisioned in science fiction works too numerous to mention. (I would love to know who was the first to write about it.) If a direct brain interface is the end goal, what are some intermediate steps before it becomes a reality? I have long dreamed of a projector that could draw directly onto one’s retina.
Google’s Project Glass goggles are not quite that, but they are at least trying to solve the same problem. However, I share John Gruber’s contempt for “concept demos,” and in my heart I believe that if Google truly expected to profit from this innovation, it would keep the project secret until it was ready to put the product in the public’s hands.
I will be keeping my eyes open — no pun intended — for input and output innovations that offer immediacy and are, unlike Project Glass, products people can actually buy.