Animoji Trains a Future Interaction Interface
In the September 2017 keynote announcing the iPhone 8 and iPhone X, Apple demonstrated Animoji, its ARKit-driven animated face emoji. It is similar to other platforms' and services' offerings.
But there is something, I think, a little different about what Apple is doing. One piece is the face identification system in the iPhone X, which projects 30,000 dots onto a person's face to ascertain identity, making it difficult for someone to use a photo or a mask of that person to gain access. The other piece is people interacting with their screens through live face scans of their muscles and facial features moving.
It is this second piece, the live interaction, where I have a strong hunch Apple is seeding things for the future. People are learning to interact with a facial-scan interface for fun and learning its capabilities so as to become comfortable with it. This is very similar to Microsoft using Solitaire in early Windows as a fun way to train people to use a mouse and get comfortable with it.
Look out a few years and start to see not Animoji, but people talking to Siri to bring up an app on their wrist, car heads-up display, or (rather banal) iPhone, then using facial interactions to swipe, scroll, and sort through feature options and light contextual information on simple, calm interfaces. A raise of the eyebrows could scroll up through options, a side smile to the left could move to a preferred option, and a side smile to the right could advance to the next screen.
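To make the idea concrete, here is a minimal sketch of how such a mapping might be wired up today with ARKit face tracking, which already exposes blend shape coefficients like browInnerUp, mouthSmileLeft, and mouthSmileRight. The threshold values and the scrollUp, selectPreferred, and nextScreen handlers are hypothetical placeholders for illustration, not anything Apple ships.

```swift
import ARKit

/// Hypothetical sketch: mapping ARKit face-tracking blend shapes
/// to navigation gestures. Thresholds and handler names are
/// placeholders, not a shipping Apple API.
final class FaceGestureNavigator: NSObject, ARSessionDelegate {
    let session = ARSession()

    // Blend shape coefficients run from 0.0 (neutral) to 1.0 (maximum).
    private let triggerThreshold: Float = 0.6
    private var gestureArmed = true  // simple debounce so one expression fires once

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        let shapes = face.blendShapes
        let brow = shapes[.browInnerUp]?.floatValue ?? 0
        let smileL = shapes[.mouthSmileLeft]?.floatValue ?? 0
        let smileR = shapes[.mouthSmileRight]?.floatValue ?? 0

        // Re-arm once the face returns to (near) neutral.
        if brow < 0.2, smileL < 0.2, smileR < 0.2 { gestureArmed = true }
        guard gestureArmed else { return }

        if brow > triggerThreshold {
            gestureArmed = false
            scrollUp()            // raised eyebrows scroll the options up
        } else if smileL > triggerThreshold, smileR < 0.3 {
            gestureArmed = false
            selectPreferred()     // side smile left: jump to the preferred option
        } else if smileR > triggerThreshold, smileL < 0.3 {
            gestureArmed = false
            nextScreen()          // side smile right: advance to the next screen
        }
    }

    // Placeholder actions; a real interface would drive its own UI here.
    private func scrollUp() { print("scroll up") }
    private func selectPreferred() { print("preferred option") }
    private func nextScreen() { print("next screen") }
}
```

The debounce matters: unlike a tap, an expression is held over many frames, so each gesture should fire once and then wait for the face to return to neutral before firing again.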
I know nothing beyond a hunch, but having played around with this idea for years, I can see the potential could be right around the corner. Finally. Maybe. Come on, Apple, let's take that step.