Using Sensors to Understand User Contexts

Market analysts now project that five billion MEMS sensors will be shipped in 2016 to support applications like navigation, dead reckoning, image stabilization, and augmented reality in smart phones, e-readers, tablets, and gaming platforms. Although these applications are all extremely useful, we think they represent only a fraction of the functions sensors will perform. After all, most consumers don't ask for directions, take pictures, or play games for more than a few hours a day. But sensors, and intelligent algorithms, will be working all the time to help applications understand user contexts.

Today, smart phones and tablets use sensors to understand user context in a few primitive ways. Turn the tablet from portrait to landscape orientation and the content reorganizes to fit the new orientation of the display. Bring the smart phone to your ear and the touch screen turns off (OK, many phones still need to work on that one). But with more sophisticated algorithms and heuristics, the sensors can do much more.
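To show how little it takes to infer even this basic context, here is a rough sketch in plain Java that decides portrait versus landscape from a single accelerometer sample: whichever axis carries most of gravity tells us how the device is being held. The axis convention and the sample values are illustrative assumptions, not any particular phone's API.

    // Sketch: inferring screen orientation from one accelerometer sample.
    // Axis convention (x across the screen, y along it) and the sample
    // values below are illustrative assumptions only.
    public class OrientationSketch {

        enum Orientation { PORTRAIT, LANDSCAPE, FLAT }

        static Orientation classify(double ax, double ay, double az) {
            // Held upright, gravity (~9.8 m/s^2) projects mostly onto the
            // y axis; held sideways, mostly onto x; lying flat, mostly onto z.
            double absX = Math.abs(ax), absY = Math.abs(ay), absZ = Math.abs(az);
            if (absZ > absX && absZ > absY) return Orientation.FLAT;
            return (absY >= absX) ? Orientation.PORTRAIT : Orientation.LANDSCAPE;
        }

        public static void main(String[] args) {
            System.out.println(classify(0.3, 9.7, 0.8));   // upright  -> PORTRAIT
            System.out.println(classify(9.6, 0.5, 1.0));   // sideways -> LANDSCAPE
            System.out.println(classify(0.2, 0.4, 9.8));   // on table -> FLAT
        }
    }

A real implementation would low-pass filter the signal and add hysteresis so the screen does not flip on every small tilt, but the core decision is this simple.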

How about a smart sensor system that knows when you are getting in or out of a car? For starters, users could send all incoming calls, except those coming from their families, to voicemail while they are driving. Then there are those "car finder" apps that can bring a driver back to his car if he starts the app after he has parked. We have been suspicious of the utility of such an app: if we had the presence of mind to start an app when we left the car, we would probably be able to remember where we had left it without any navigation aid. So it would be more useful if smart sensors automatically triggered the navigation system to remember the location where we got out of the car, for those absent-minded moments we all have.
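To make that concrete, below is a minimal sketch of the kind of background heuristic such a system could run: watch for the transition from "in vehicle" to "on foot" and remember the last known location at that moment. The activity labels, the LatLng type, and the ParkingMemory class are all hypothetical names for illustration; on a real phone they would come from the activity-recognition and location services, and the rule would need debouncing against noisy classifications.

    import java.util.Optional;

    // Sketch: remember where the car was parked by watching for the
    // transition from "in vehicle" to "on foot". All names here
    // (Activity, LatLng, ParkingMemory) are illustrative, not a real API.
    public class ParkingMemory {

        enum Activity { IN_VEHICLE, ON_FOOT, STILL, UNKNOWN }

        record LatLng(double latitude, double longitude) {}

        private Activity previous = Activity.UNKNOWN;
        private Optional<LatLng> parkedAt = Optional.empty();

        // Called whenever the activity classifier and location update.
        void onUpdate(Activity current, LatLng lastKnownLocation) {
            if (previous == Activity.IN_VEHICLE && current == Activity.ON_FOOT) {
                // We just stepped out of the car: remember this spot.
                parkedAt = Optional.of(lastKnownLocation);
            }
            previous = current;
        }

        Optional<LatLng> whereDidIPark() {
            return parkedAt;
        }

        public static void main(String[] args) {
            ParkingMemory memory = new ParkingMemory();
            memory.onUpdate(Activity.IN_VEHICLE, new LatLng(37.4219, -122.0841));
            memory.onUpdate(Activity.ON_FOOT,    new LatLng(37.4221, -122.0845));
            System.out.println(memory.whereDidIPark()); // spot where we got out
        }
    }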

A new smart phone now contains two cameras, an accelerometer, a magnetometer, a gyroscope, a proximity sensor, a light sensor, and two or more microphones. These sensors capture a huge amount of data that can be used to inform and entertain consumers. At the same time, these data also capture the reality surrounding the users. Applications running on the phone can process the data and mine for information that helps them adjust their configurations automatically to better match where the users are and what they are doing.
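As a sketch of what that mining can look like at its very simplest, the fragment below classifies a short window of accelerometer magnitudes into a coarse context by its variance. The thresholds, labels, and sample values are illustrative assumptions; real context engines fuse many sensors and use far richer features than variance alone.

    // Sketch: a coarse context guess from one window of accelerometer data.
    // Thresholds and labels are illustrative assumptions, not a production
    // classifier, which would fuse several sensors and richer features.
    public class ContextSketch {

        enum Context { STILL, WALKING, IN_VEHICLE }

        static Context classify(double[] magnitudes) {
            double mean = 0;
            for (double m : magnitudes) mean += m;
            mean /= magnitudes.length;

            double variance = 0;
            for (double m : magnitudes) variance += (m - mean) * (m - mean);
            variance /= magnitudes.length;

            // A phone at rest shows almost no variance, a walking user shows
            // strong periodic motion, and a car ride sits in between (engine
            // and road vibration). These cut-offs are made up for illustration.
            if (variance < 0.05) return Context.STILL;
            if (variance > 2.0)  return Context.WALKING;
            return Context.IN_VEHICLE;
        }

        public static void main(String[] args) {
            double[] onDesk  = {9.80, 9.81, 9.79, 9.80, 9.81};
            double[] walking = {8.1, 12.3, 7.4, 13.0, 9.0, 11.5};
            double[] driving = {9.3, 10.2, 9.6, 10.4, 9.1, 10.0};
            System.out.println(classify(onDesk));   // STILL
            System.out.println(classify(walking));  // WALKING
            System.out.println(classify(driving));  // IN_VEHICLE
        }
    }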

Concerns for privacy notwithstanding, consumers do look forward to smart phones that can become truly smart, mind-reading assistants. The first step toward that is having smart phones that can automatically infer user context. Kenneth Noland, the American abstract painter, said it well: "context is the key – from that comes the understanding of everything."
