I was having a very Zen moment on Saturday, influenced by Do Androids Dream of Electric Sheep?, where I was trying to imagine what software should be, and how it should be built, in the future.
It starts from my dislike of the paradigm for computer interaction we have now, and, even worse, of how we program computers (based on system intention, functionality capture and workflows).
So there I was, driving and thinking how I’d love to control a computer with my mind, and then wondering how I’d program: a letter, a word, or a concept at a time. I was thinking of a current project, where the largest part is the user interaction. How to create the interaction first, and build the “back-end” into it later. How I’d love to be able to dump my mental picture and then explore the model with the rest of the team.
Sort of like the drag-and-drop way we build UIs now, but all very much smarter. Currently we drag-and-drop widgets and then we’re done; the maintenance part is hard. Or you make an early decision about “not using security”, yet later it turns out to be required. So almost a workflow of what should be in a program, with the parts themselves understanding this.
I see it as a live workflow of what is still to do and what is already done, so things can be deferred or revisited.
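To make that a little more concrete, here is a minimal sketch, purely hypothetical and in the C++ I’m stuck with today, of what such a live workflow might look like if the program carried it around as data about itself; the Concern and Status names are just things I made up for the illustration.

    // Hypothetical sketch only: the program keeps a live list of its own
    // concerns, so "what is left to do?" is something it can answer itself.
    #include <iostream>
    #include <string>
    #include <vector>

    enum class Status { Todo, Deferred, Done };

    struct Concern {
        std::string name;   // e.g. "security", "back-end"
        Status      status; // where this concern currently stands
        std::string note;   // why it was deferred, or what remains
    };

    int main() {
        std::vector<Concern> workflow = {
            {"user interaction", Status::Done,     "first cut of the model"},
            {"back-end",         Status::Todo,     "wire the model to storage"},
            {"security",         Status::Deferred, "skipped for now; revisit before release"},
        };

        // Anything not done can be surfaced, deferred again, or revisited.
        for (const auto& c : workflow) {
            if (c.status != Status::Done) {
                std::cout << c.name << ": " << c.note << '\n';
            }
        }
    }

Obviously the point isn’t the code; it’s that the decisions and the deferrals live inside the system rather than in someone’s head.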
So while I had this organic system in mind, I was acutely aware that our current connectivity with a system is woefully constrained: keys for written text, and a pointing device. I want more direct control over high-level concepts. I want things to move along, and I’m impatient, and not sure our commercial markets will allow the changes required, since anything new will first have to surpass the current state. It’s like we’re stuck in an interaction local maximum.
I want to create the same way I can hear/write/read that voice in my head: in conceptual blocks.
Hmmm, lucky it’s swimming time, otherwise I think I would have just depressed myself too much to handle C++ via a QWERTY keyboard right now.