Are intelligent “bespoke” interfaces the future of computing?

I have some bad news for my colleagues in the user experience profession who design interfaces. In about twenty years, or less… you may be out of a job.

Why do I make such a bold statement? Because I have been thinking about how software design and technology have evolved over the years, and I have a theory about how developments over the past thirty years are leading us to a rather interesting future.

We have seen computer interfaces change dramatically in three decades – from text-based UIs and primitive graphical user interfaces, all the way to animated interactive UIs that support voice interaction. What is next? I think the answer comes from looking at four factors:

First, consider how interfaces have evolved. The current design trend is to support simple tasks more than complicated workflows… This has been driven by the rise of mobile computing and its limited screen sizes, but it has also been influenced by the need for applications to have a minimal learning curve to increase adoption and use.

Second, look at what we have learned and how user experience design has matured. As design has evolved, best practices have been defined, along with design patterns and libraries that can be leveraged. Designers no longer have to “start from scratch”; they can draw on these patterns and libraries to create effective designs quickly.

The third factor is computing power. We are seeing the computing power of all devices increase at a pace that is astonishing. We are carrying mobile devices in our pockets that would have been considered the stuff of science fiction just a decade ago, and the pace of innovation doesn’t appear to be slowing.

Finally, consider personalization. Many interfaces surface relevant and important information based on the user’s previous activity and (perceived) interests. Google, Amazon, and Windows 8 all deliver this type of UI (though in different ways and to varying degrees).

Combining all these factors and extrapolating the trends, I see future user interfaces being “intelligent” ones. These “bespoke” interfaces will be self-defining and self-creating, while following best practices and standards.

Imagine applications where UIs will proactively provide options and information before the user even knows they need it. The UIs will self-adjust based on usage, with more frequently accessed functionality being surfaced and rarely accessed functions being “buried” on a secondary screen.

UI designers, don’t worry… there are still plenty of things to do. Frameworks and rules (based on best practices) will need to be defined: guidelines around things like how large and how prominently positioned primary controls should be. Someone will still have to define the “default” layout and display… though you won’t be designing all the screens. The actual layout will come from the computing device monitoring usage and “shifting” the UI to reflect it.
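To make this concrete, here is a minimal sketch of what such a usage-driven layout might look like. The `AdaptiveMenu` class, the menu items, and the “pinned” rule are all hypothetical, invented for illustration: the designer supplies the default order and the rules (primary controls that never move), and the device reshuffles everything else by observed use.

```python
from collections import Counter

class AdaptiveMenu:
    """A hypothetical self-adjusting menu: items are ranked by observed
    usage, while designer-defined rules still apply (pinned primary
    actions keep their place, and the least-used items drop to a
    secondary "buried" screen)."""

    def __init__(self, items, pinned=(), primary_slots=4):
        self.items = list(items)            # designer's default layout
        self.pinned = set(pinned)           # primary controls that never move
        self.primary_slots = primary_slots  # how many items fit on screen one
        self.usage = Counter()

    def record_use(self, item):
        # Called each time the user selects an item.
        self.usage[item] += 1

    def layout(self):
        # Pinned items keep the designer's original order.
        pinned = [i for i in self.items if i in self.pinned]
        # Everything else is ranked by how often the user selects it
        # (ties keep the default order, since sorted() is stable).
        rest = sorted((i for i in self.items if i not in self.pinned),
                      key=lambda i: -self.usage[i])
        visible = pinned + rest
        # Split into the primary screen and the "buried" secondary screen.
        return visible[:self.primary_slots], visible[self.primary_slots:]
```

For example, if a user keeps choosing “Share,” it migrates onto the primary screen while untouched items sink to the secondary one, yet the pinned “New” and “Save” controls never move:

```python
menu = AdaptiveMenu(["New", "Open", "Save", "Export", "Print", "Share"],
                    pinned=["New", "Save"], primary_slots=4)
for _ in range(5):
    menu.record_use("Share")
menu.record_use("Export")
primary, secondary = menu.layout()
# primary   → ["New", "Save", "Share", "Export"]
# secondary → ["Open", "Print"]
```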

Will some users freak out when the UI changes on them, outside of their control? Yes, and a fine balance will need to be struck between supporting user control and freedom and delivering a self-designing, optimized experience. But users will adapt, especially when the benefits become obvious and enough technology providers adopt the practice.

The impacts of bespoke interfaces are huge. How do you support a million users when their interfaces all look and work differently? How do you roll out new functionality? What about devices that are shared by multiple users? I don’t have the answers to these questions, but I can see lots of possibilities. And the idea that my personal technology would be completely aligned to what I did and needed, with very little effort on my part… as a user, that makes me very excited indeed.

UPDATE: Apple was recently granted a major patent for an advanced “sensor-based UI” that changes based on various factors. Hmm.