Bret Victor's current project is Dynamicland, a real space he's been building to explore computing within the physical world, with humans at its center. The system he invented, Realtalk, is integrated into the building, where cameras and projectors engage with the humans inside it.
https://dynamicland.org/
I don't know about full 3D manipulation, but we used buttons, switches, and levers for decades. They worked great. They are easy to see, feel, and manipulate. They can't move or change, so muscle memory develops naturally. You can use them without looking and with gloves on - essential when driving or operating other dangerous machinery. Their natural constraints made them easy to discover and learn.
Over time we got better at packing more into a button. Multimodal buttons became a thing, and once alphanumeric displays became practical, a single knob or arrow keypad for scrolling through tiny menus became common. Twenty years after that, everything is a flat, touch-sensitive button, and increasingly we hide them from view entirely until they are interacted with.
I believe the reasons for this are twofold. First, touchscreens are futuristic. People are drawn to new, shiny things, even when they are functionally worse. The second is cost. Physical switches are quite expensive compared to digital electronics. As touchscreens have become cheaper, they have replaced more physical inputs. Similarly, they have replaced discrete indicator lights, meters, and other single-purpose indicators.
The cost savings are modest in terms of an entire product, but businesses will spend days saving one cent in production costs because it adds up over millions of units. It usually amounts to a small percentage of profits, but they'll chase anything.
Great points; it just occurred to me that I'm biased towards high-level applications! HCI applies equally to our appliances, factories, and vehicles, to name a few, and I absolutely agree that physical affordances are not quite as outmoded as Musk might say they are.
Fun fact that we don't talk about enough: I'm not sure if it's still happening, but at some point, the controls in the SpaceX Dragon capsules were touchscreens running JavaScript. It brings me joy (and obviously fear!) to imagine an npm node_modules directory making it to space… I think the consensus is spot on that there can absolutely be too many touchscreens.
More fundamentally, your answer doesn't really speak to what I was thinking of, though: general computing. Buttons and switches are great for purpose-built machines, but their applicability to general computing is minimal. How could one possibly fundamentally improve upon the keyboard or mouse?
> a dynamic medium we can see, feel and manipulate
I wonder if this is a "pick 2" situation.
Touchscreen: see and sorta manipulate
Orion: manipulate (in only the weak sense that it uses more than one finger)
Braille display, haptic glove/suit: feel and see (redundantly?)
In my touchscreen-biased mind, I can't imagine a technology that could do all three of these, which I would want to use regularly or carry around, or wear for hours at a time.
The touchscreen, combining the "see" with at least a tiny bit of the "manipulate", is actually an amazing step, but it may be a dead end. I've seen research into things like tactile screens, with adjustable surface height. But the benefit it gives is evidently not worth the cost in development, manufacturing, or complexity.
Thanks for taking the time, really thought provoking… Will definitely be kicking that around in my head. No counterpoints.
I was going to correct you that Orion is just a baby step towards Minority Report holograms (aka plain ol’ non-tangible holograms), and that reminded me: the famous opening scene had lots more than holograms!
Specifically, they had physical objects (hard drives, knobs, spheres, etc.) that didn't seem to have much electronics actually in them, but that were tracked and responded to by the computing system in a dynamic way. For example, easily passing data between the hologram and a little ball that you can move to another display, give to someone else, etc., in a completely seamless (read: AI-managed) way.
That might be my biggest hope for the future, while we’re waiting on tactile gloves. This kind of thing: https://www.behance.net/gallery/105147921/Apple-x-Procreate-...
Counterpoint to myself: Accelerometer-based inputs might demonstrate that I'm wrong.
In the Zelda games on the Switch, you aim the bow and arrow with motion controls. This may be the smoothest, most natural interface I can think of; changing back to joystick controls feels really bad.
The input is 2DOF manipulation, and the feedback is primarily seen on the screen. There is a small amount of "feel" feedback inherent to the physical motion of the controller.
Another example on the Switch is a "labyrinth" balance minigame in Kirby.
I recall early iPhone games exploring this UI, but it doesn't seem to have developed into anything except games and levelling utility apps.
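To make that 2DOF claim concrete, here's a minimal sketch of how motion aiming like Zelda's generally works: the controller's gyro reports angular velocity, and the game integrates it each frame into yaw/pitch aim angles while the feedback loop runs through the screen. All names here (AimState, apply_gyro) are illustrative, not any real console SDK.

```python
from dataclasses import dataclass

@dataclass
class AimState:
    yaw: float = 0.0    # radians, left/right aim
    pitch: float = 0.0  # radians, up/down aim

def apply_gyro(state: AimState, yaw_rate: float, pitch_rate: float,
               dt: float, sensitivity: float = 1.0) -> AimState:
    """Integrate gyro angular velocity (rad/s) over one frame into aim angles."""
    state.yaw += yaw_rate * sensitivity * dt
    state.pitch += pitch_rate * sensitivity * dt
    # Clamp pitch so the reticle can't flip over the top of the view.
    state.pitch = max(-1.2, min(1.2, state.pitch))
    return state

# One 16 ms frame of a slow tilt: the reticle nudges, the player corrects.
s = AimState()
apply_gyro(s, yaw_rate=0.5, pitch_rate=2.0, dt=0.016)
```

The "smoothness" people report likely comes from this direct velocity-to-angle mapping: tiny wrist motions become proportionally tiny reticle motions, with no joystick dead zone in between.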
> Maybe I’m really stuck in the present, but short of tangible holograms I’m not sure how we’d make tactile computing devices.
>>> what I was thinking of [...] How could one possibly fundamentally improve upon the keyboard or mouse?
You're wearing a VR headset, so the visual UX of keyboard and mouse can be 3D anything. The keyboard and surround is all multitouch. The keyboard is n-key rollover - feel free to chord. The mouse is tracked in 3D. It's also on a motorized arm, so it has force feedback, and you can leave it wherever, or have it move itself. You have two of them. Your hands are tracked in 3D, so feel free to gesture, and to interact with the visual environment. If you value the fingertip feel of your keyboard, you can stick with fingernail vibrators. The mouse surface is variously pressure sensitive. The mouse surface is vibrator chips. The keys are covered with vibrators. The keys are multi-axis pressure sensitive. Your palm and back of hand are covered with vibrators. The headset provides a high-resolution soundscape. The headset tracks head and gaze. Any screen is notional, as the headset provides high resolution content with any apparent focus depth. The keyboard is on a robot arm, so you can stand or pace - just ask, and it will be at hand. For some tasks, like sorting piles of objects, you might briefly prefer a different interface device.
Much of that is current-ish tech, though painful to gather and make comfortable, given a dysfunctional market for such things.
Contrast the available design richness of phone vs desktop, interface and apps. Imagine desktop absorbing lessons where phone is better. Now try to imagine the design richness available to desktop vs something, where something isn't pretending to be a half-century old terminal emulating a manual office typewriter.