The death of the UI…?

Based on the presentation given at UPA 2012 this June.

I have spent a lot of time over the past month looking at alternate methods of interaction, different ways people can engage with technology other than the traditional mouse/keyboard paradigm. This focus was triggered by watching demo videos of The Leap, a new computer peripheral coming in November that provides gesture control that makes the Kinect look like a kindergarten toy (go ahead, google it and watch – I’ll still be here when you get back).

The videos are amazing, with gesture recognition down to the centimeter level, and The Leap is definitely a significant… err, step forward. We are finally at the point we saw years ago in the movie Minority Report, where the user controlled and engaged with the computer without touching a thing. The future will soon be here, faster than expected.

Gestural interaction is finally coming into its own, but what about voice? We have had some form of “voice control” of our computers for years, starting with Dragon dictation software over fifteen years ago, followed by interactive voice systems for automated services such as banking, as well as assistive technology for visually impaired computer users. The thing is, until this year these systems were not very “smart”… they had preset options and didn’t “learn” very well (or at all).

All this changed with the release of Siri for the iPhone last year, when Apple rolled out a very “smart” personal assistant, with artificial intelligence more advanced than anything that had appeared in a consumer device before. Majel, a similar system reportedly being developed by Google for its Android platform, is also said to have very good AI. The next version of Siri, just announced at Apple’s Worldwide Developers Conference, looks to have even more advanced features and functions. An already pretty good system is getting better.

So, with gesture and voice becoming more and more powerful and available to customers on multiple platforms, will these new interaction methods mean the death of the UI? Are user experience designers going to be out of a job in a few years? No… but we have to adapt. As design professionals we not only have to get used to this technology, but also learn how to design experiences around it and with it.

When looking at a design problem, it will very soon be possible to design your solutions to support voice and/or gesture as alternate controls… or to make them the primary way to interact. There may be use cases where voice control or gesture is the optimal interaction method… or it may not work at all. We have to think about factors such as reliability, ease of use, and context, all of which inform the decision as to how we want users to engage with our designs.

A quick example: recording a TV show. The current process works like this: the user picks up the remote, browses the channel listing, and clicks the record button when they find “Breaking Bad” in the onscreen guide. The TV UI refreshes to show an icon next to the show, confirming to the user that his or her choice was set. A potential process in the future could go something like this: the user says “TV, record the new Breaking Bad,” and the TV replies “It’s set to record on Wednesday at 9.”
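To make that flow a little more concrete, here is a minimal sketch of how such a voice command might be mapped to a recording action. Everything in it (the toy channel guide, the handle_utterance function, the regular expression) is hypothetical and made up for illustration; it is only meant to show the shape of the interaction, not any real TV or voice-assistant API.

```python
# Hypothetical sketch: turning a spoken command into a DVR recording and a spoken reply.
import re
from dataclasses import dataclass

@dataclass
class Listing:
    title: str
    day: str
    time: str

# A toy "channel guide" standing in for the real onscreen listing.
GUIDE = [
    Listing("Breaking Bad", "Wednesday", "9:00 PM"),
    Listing("Mad Men", "Sunday", "10:00 PM"),
]

def handle_utterance(utterance: str) -> str:
    """Parse a 'record ...' command, find the show, and return the spoken confirmation."""
    match = re.search(r"record (?:the new )?(.+)", utterance, re.IGNORECASE)
    if not match:
        return "Sorry, I didn't catch that."
    wanted = match.group(1).strip().rstrip(".")
    for show in GUIDE:
        if show.title.lower() == wanted.lower():
            # A real system would hand off to the DVR scheduler here.
            return f"It's set to record on {show.day} at {show.time}."
    return f"I couldn't find {wanted} in the listings."

print(handle_utterance("TV, record the new Breaking Bad."))
# -> It's set to record on Wednesday at 9:00 PM.
```

The interesting design work is in everything this sketch glosses over: disambiguating similar titles, confirming before acting, and deciding what the system should say (or show) when it isn’t sure.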

The example above uses voice, but you could easily create another using gestural interaction – the user could swipe through shows on a TV listing and move the preferred one to a “record” target area. Or consider something like a computer-aided drafting program, where instead of using a mouse and pointer to smooth or stretch an object, the system could recognize the user’s own fingers as they “mold” the object. And so on…

In closing, we should always focus on designing the experience, not a UI. We need to embrace and use these new interaction methods, because users may (and probably will) adopt them FAST. We need to be ready.
