ready-to-hand
As we move into the era of “natural user interfaces” (NUI), I think designers struggle with what it means for a digital interface to be natural. Touching, gesturing, and speaking might seem natural because they’re what we do in everyday life, but the devil is in the details.
Gesturing at large displays à la Minority Report causes extreme arm fatigue (aka “gorilla arm”). Touch can be cool, but often it’s simply a new way to operate a standard graphical user interface (GUI); it is still rarely true direct manipulation of digital objects outside the desktop metaphor. Using speech to interact with your phone or computer is a nice idea, but if you are reduced to talking like a robot, that’s not natural; that’s you adapting to the limitations of the technology. (To me, speaking like a machine and guessing at what it will understand is even more humiliating than subserviently clicking or typing.)
Even if touch, gesture, or speech works for what you’re doing, we don’t yet have many systems that combine multiple modalities the way we’d expect, letting us point and say, “hey, would you move that blue folder over there?” We’re really just at the beginning; we’re not even close to getting it right yet.
It gets even more interesting (and challenging) in the wearable computing space, where often you don’t have any display or surface to act upon.
John Pavlus at MIT Technology Review recently shared some very insightful thoughts about the Google Glass UI in his post, “Your Body Does Not Want to Be an Interface.” He introduces the difference between ready-to-hand (something that becomes an extension of you as you use it) and present-at-hand (something you have to attend to and act on, which stands between you and what you’re trying to do). The interface for Glass (so far) uses seemingly natural gestures or actions (staring, nodding, winking) to issue specific commands to the Glass system, effectively making your body the place where the UI happens.
The assumption driving these kinds of design speculations is that if you embed the interface–the control surface for a technology–into our own bodily envelope, that interface will “disappear”: the technology will cease to be a separate “thing” and simply become part of that envelope. The trouble is that unlike technology, your body isn’t something you “interface” with in the first place. You’re not a little homunculus “in” your body, “driving” it around, looking out Terminator-style “through” your eyes. Your body isn’t a tool for delivering your experience: it is your experience. Merging the body with a technological control surface doesn’t magically transform the act of manipulating that surface into bodily experience. I’m not a cyborg (yet) so I can’t be sure, but I suspect the effect is more the opposite: alienating you from the direct bodily experiences you already have by turning them into technological interfaces to be manipulated.
Indeed. The article goes on to explore this issue in a direct and interesting way that I haven’t seen before.
I do think we adapt to interfaces so that they seem to disappear, if the interface gives us enough benefit: a writer doesn’t think about how to form letters with a pen, and a guitarist doesn’t think about finger placement while playing a song. But if we want to drive an interface in the background while doing other things, and to get the benefit of computing just by being ourselves, we have a long way to go. I don’t know if we will ever get to the lifestyle portrayed in Vernor Vinge’s Rainbows End, nor do I think we want to…
We want our technology to be ready-to-hand: we want to act through it, not on it. And our bodies don’t have to become marionettes of that technology. If anything, it should be the other way around.
Well said.