The Now, The New, and The Next of Alternative UIs

by Srikar Kalvakolanu

Mobile, Voice, VR, and Neuro

One of my favorite things to do in my free time is to play around with my Amazon Echo (Alexa). Over the past 2.5 years (I was an early adopter), it has transitioned from a decent Bluetooth speaker to an all-out home automation unit that can even call an Uber. The sheer growth of the product is fairly incredible, and it all started with a small bet from Amazon that now seems to be knocking on the door of becoming the new normal.

2.5 years ago, voice UIs seemed like a gimmick for telling your Echo to do one of five seemingly useful things. But with the development around the Alexa platform and the arrival of other products such as Google Home and, more recently, Bixby by Samsung (formerly Viv), it seems as though voice may be the new human-computer interface that people are latching onto.

At the same time, other alternative UIs are popping up and becoming more viable for new products, including VR and even Brain-Computer Interfacing. So, it’s time to briefly discuss some of the interfaces that exist now and some that are up and coming in the next few years.

The Now

  1. Mobile — Probably the most ubiquitous of the “new” platforms, mobile has really taken a stranglehold on the UI experience. While not exactly 100% standard yet, mobile is now a primary mechanism for interfacing with a ton of different applications (think Pokemon Go, Instagram, Snapchat, etc.) and a popular channel for many others (think Facebook, Expensify, Amazon, etc.). Mobile has become an expectation as consumer preferences have shifted toward consuming information that way.
  2. Desktop — Native desktop apps are becoming a bit of a dying breed with the rise of the subscription economy and SaaS. However, some of the most common applications we use today still ship a desktop application (Microsoft Office, Spotify, Slack, web browsers, etc.). Many other desktop applications have moved online to create better experiences and content. Internet connectivity makes products better and more portable, leading me to the final category…
  3. Desktop Browser — Browser-based UIs have been around for a while, but they have gone through an innovative period over the past 10–15 years with the rise of rich in-browser experiences (think about how you watch Netflix on your laptop or shop on Amazon). This is a large category of UIs that still exists today and is important to keep track of.

The New

  1. Voice — Alexa is currently the most prominent pusher of this technology; Siri never popularized the voice platform the way Alexa has. Voice interfacing is now seen as the next big thing in product design: a new mechanism of input to produce an output. The main work here is building the NLP engine that moves voice to text and then interprets that text as commands (a rough sketch of this pipeline follows this list). We’re still in the early stages, but ground is being gained quickly. Soon, instead of saying something like “Alexa, lights off” or designating specific device names, Alexa will be able to use context to do things for you without exact descriptions or specific voiced commands (see AI in The Next). Voice can sometimes be gimmicky and finicky, but it’s still making it easier for people to get things done (in the right contexts).
  2. Augmented Reality — Augmented reality is barely in the new category, with only a few companies making real inroads here, but it is nonetheless an interesting mechanism of interfacing. We had a Microsoft HoloLens at the office, and I found that while augmented reality is cool, has a ton of potential, and made my job a bit easier by giving me a few virtual screens to look at, it never really became my preferred way of consuming information. That’s currently the biggest problem with AR: the tools and mechanisms for bringing AR into the world are often clunky, expensive, and delicate, and they don’t actually provide a meaningful layer of information over the real world. The apps I used most with the HoloLens were Space Invaders, Excel, and YouTube, and none of them really offered anything that interacted with the real world. It will be interesting to see AR come to phones, making it more accessible, and to see whether the content it provides can truly be useful.
  3. Virtual Reality — Virtual reality is also barely in the new category; it has been pushed into The New largely by the amount of funding and press it has received. VR still lacks the few iconic use cases that would make it a must-have for many products, but it is getting closer. Video games have been pushing the technology forward, and it is finally seeing significant non-gaming applications like travel and education. VR suffers from problems similar to AR’s, namely the lack of high-fidelity, low-cost options (Google Cardboard is nice, but hasn’t really pushed the envelope the way the Oculus Rift, HTC Vive, and others have). VR also has some unique UI issues due to its fully immersive experiences: things like movement and orientation are still not at the level where people can stay in VR for extended periods of time. As the technology progresses, it should become more useful and more accessible, just like AR.
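
To make the voice pipeline from item 1 concrete, here is a minimal sketch of the flow described above: speech becomes text, text becomes an intent, and the intent becomes a command. Everything here is a hypothetical stand-in written for illustration; the function names, the intent, and the canned transcript are assumptions, not Alexa’s (or any vendor’s) actual API.

```python
# Minimal sketch of a voice-UI pipeline: speech -> text -> intent -> command.
# All names and data below are illustrative stand-ins, not a real vendor API.

def transcribe(audio_bytes: bytes) -> str:
    """Stand-in for a speech-to-text engine; pretend it returns a transcript."""
    return "turn off the living room lights"  # hypothetical output

def parse_intent(transcript: str) -> dict:
    """Toy intent parser: keyword matching in place of a real NLP model."""
    text = transcript.lower()
    if "lights" in text and ("off" in text or "on" in text):
        return {
            "intent": "SetLightState",
            "slots": {
                "room": "living room" if "living room" in text else "default",
                "state": "off" if "off" in text else "on",
            },
        }
    return {"intent": "Unknown", "slots": {}}

def dispatch(intent: dict) -> str:
    """Map a parsed intent to an action on a (pretend) smart-home hub."""
    if intent["intent"] == "SetLightState":
        room, state = intent["slots"]["room"], intent["slots"]["state"]
        return f"Okay, turning the {room} lights {state}."
    return "Sorry, I didn't understand that."

if __name__ == "__main__":
    transcript = transcribe(b"")  # pretend audio input
    print(dispatch(parse_intent(transcript)))
```

In a real assistant the keyword matching would be replaced by a trained NLU model with slot filling, but the overall shape of the pipeline stays the same.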

The Next

  1. Gesture — This category is getting closer to becoming the now and has already flashed itself a bit in the augmented reality space. Specifically, the Microsoft HoloLens uses gestures like “bloom” and “point and click” to navigate the interface. Gestures in general have a long way to go before being native to experiences, simply because we aren’t used to using them. Project Soli by Google ATAP is a great example of where these innovations are headed, providing a new way to interface with products. Ultimately this will rely on the ability to create meaningful gestures and the mechanisms to capture them and use them as input (see the sketch after this list).
  2. Neuro — Neuro is starting to heat up right now, with Facebook announcing a brain-computer interfacing device (aka mind-to-text) and Elon Musk discussing Neuralink. While this sounds cool (and it may not be as far off as we think), the idea of neural reading has existed for quite a while, and we are finally finding new and interesting ways to use this technology in products to benefit society.
  3. Full Automation/AI — Imagine a world where you don’t even have to think about things; they just happen. You go to a business lunch and you don’t have to classify your expenses or enter information, because a machine somewhere knows enough about you and has enough context to do it automatically and do it correctly. Notes are automatically taken for you in your CRM. Going even further, your favorite meal is ready when you come home, made automatically by your 3-D printing oven. This is a ways off, but we are starting to see some of it happen. In manufacturing, the creation of an aluminum can is so refined and automated that it’s incredible (this video blew my mind).
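
As a rough illustration of the last point under Gesture above (creating meaningful gestures plus a mechanism to capture them and use them as input), here is a minimal sketch: a stub classifier turns a captured motion sample into a named gesture, and a small registry maps gestures to actions. The gesture names, the sample format, and the handlers are assumptions made purely for illustration; a real system (Soli, HoloLens, etc.) would classify rich sensor data with a trained model.

```python
# Toy sketch of gesture input: capture a sample, classify it into a named
# gesture, and dispatch it to a handler. All names and data are hypothetical.

from typing import Callable, Dict, List

GestureHandler = Callable[[], str]

HANDLERS: Dict[str, GestureHandler] = {
    "swipe_left": lambda: "previous screen",
    "swipe_right": lambda: "next screen",
    "pinch": lambda: "zoom out",
}

def classify(sample: List[float]) -> str:
    """Stub classifier: pretend the net horizontal displacement of the hand
    decides the gesture. A real system would use a trained model over
    sensor data (e.g. radar or depth-camera frames)."""
    displacement = sample[-1] - sample[0]
    if displacement > 0.5:
        return "swipe_right"
    if displacement < -0.5:
        return "swipe_left"
    return "pinch"

def handle(sample: List[float]) -> str:
    """Route a captured sample to the action registered for its gesture."""
    gesture = classify(sample)
    action = HANDLERS.get(gesture, lambda: "unrecognized gesture")
    return f"{gesture} -> {action()}"

if __name__ == "__main__":
    print(handle([0.0, 0.3, 0.9]))  # swipe_right -> next screen
    print(handle([0.9, 0.4, 0.0]))  # swipe_left -> previous screen
```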

UIs are critical to new products and to the progression of technology, and new UIs are popping up all the time (3D Touch on the iPhone and Edge Sense on the new HTC U11 are great examples). Keeping on top of UIs and having strategies for using them effectively is important for every business that wants to maximize its product lifecycle.