Monday, November 14th, 2016
A User Interface (UI) defines the interaction between humans and machines. Machine Learning, on the other hand, is a branch of Artificial Intelligence (AI) concerned with designing and developing algorithms that take empirical data as input, such as data from sensors or databases, and yield patterns or predictions thought to be features of the underlying mechanism that generated the data.
Over the past 30 years, as every facet of our lives has migrated onto computer screens, designers have focused on perfecting user interfaces—placing a button in just the right place for a camera trigger or collapsing the entire payment process into a series of swipes and taps. But in the coming era of ubiquitous sensors and miniaturized mobile computing, our digital interactions won’t take place simply on screens. They will happen all around us, constantly, as we go about our day. Designers will be creating not products or interfaces but experiences, a million invisible transactions.
The next challenge for experience design is to create a constellation of devices, including wearable gadgets, tablets, phones, and smart appliances, that can coordinate with one another and adapt to users’ changing needs. If all our devices interacted so cooperatively, whole new possibilities would begin to emerge.
“AI is the new UI.” That is, the effort and attention that designers once poured into interfaces should be extended to code that doesn’t just react to the push of a button but anticipates your actions. In the next generation of consumer hardware, the UI options will be limited, but the consumer use cases will be quite varied.
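To make the idea of software that anticipates actions concrete, here is a minimal sketch (not from the original article; all action names are hypothetical) of a first-order frequency model that predicts a user's next action from past behavior:

```python
from collections import Counter, defaultdict

class ActionPredictor:
    """Toy first-order Markov model: predicts the user's next action
    from counts of previously observed action-to-action transitions."""

    def __init__(self):
        # transitions[a][b] = how often action b followed action a
        self.transitions = defaultdict(Counter)

    def observe(self, previous_action, next_action):
        self.transitions[previous_action][next_action] += 1

    def predict(self, current_action):
        # Return the most frequent follow-up action, or None if unseen
        counts = self.transitions.get(current_action)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Train on a short, hypothetical session log
log = ["open_camera", "take_photo", "share",
       "open_camera", "take_photo", "edit"]
predictor = ActionPredictor()
for prev, nxt in zip(log, log[1:]):
    predictor.observe(prev, nxt)

print(predictor.predict("open_camera"))  # "take_photo"
```

Real systems use far richer models and signals, but the principle is the same: the interface pre-surfaces the likely next step instead of waiting for a tap.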
Summing-up: Eliminating complexity in the User Interface is a really, really big deal. It’s why many people love Apple products. And Machine Learning is the next natural evolution in reducing UI complexity. We’ll start to see Machine Learning show up in lots of consumer interfaces that figure out for themselves what the user is trying to achieve.