Gestures and context are getting a lot of attention in the sensor industry recently, and although they are related, there are important distinctions.
- Gestures are defined as a form of non-verbal communication based on an action or movement. They are instantaneous and self-contained — for example, a hand-wave.
- Context is defined as the set of circumstances surrounding a particular event or situation. It takes advantage of historical information not always captured by gestures — for example, distinguishing that the hand-wave is someone waving goodbye at a train station.
Although motion sensors can be used to identify both gestures and contexts, the techniques required are different. Gesture algorithms often use sensor fusion to match a 3D trajectory or a deterministic pattern. Artificial gestures such as “shake to undo” on the iPhone can lead to a poor user experience: learning them is ad hoc, and false positives are frustrating to the user.
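To make the deterministic-pattern idea concrete, here is a minimal sketch of a shake detector that counts strong direction reversals in a single accelerometer axis. The function name, thresholds, and window convention are illustrative assumptions, not part of any shipping gesture engine — and the second example hints at why such hard-coded patterns invite false positives.

```python
def detect_shake(accel_x, threshold=1.5, min_reversals=4):
    """Deterministic gesture matching (illustrative only): report a
    'shake' when a one-axis accelerometer trace (in g) contains enough
    strong sign reversals. Thresholds are assumptions for the sketch."""
    last_sign = 0
    reversals = 0
    for a in accel_x:
        if abs(a) < threshold:
            continue  # ignore weak motion below the trigger threshold
        sign = 1 if a > 0 else -1
        if last_sign and sign != last_sign:
            reversals += 1  # a strong back-and-forth direction change
        last_sign = sign
    return reversals >= min_reversals

# A vigorous back-and-forth motion triggers the detector...
print(detect_shake([2.0, -2.1, 1.9, -2.2, 2.3, -1.8]))  # True
# ...while gentle handling does not. But any jolty motion that happens
# to cross the threshold repeatedly would also fire — the false-positive
# problem described above.
print(detect_shake([0.1, 0.2, -0.1]))  # False
```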
A context-aware platform takes in more of the situation to understand user motion in a natural way. The foundation is a mechanism that encompasses the many variations in sensor signals without relying on the user learning prescribed gestures. Still, gestures can assist in indicating a change of context. For example, standing up from a chair is a natural gesture and could point to a Posture context of standing or walking. Taking a phone out of a pocket is another natural gesture and could point to a Carry context of being held in hand or placed on a table.
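As a rough illustration of inferring a Carry context from natural motion rather than a prescribed gesture, the sketch below classifies a window of accelerometer-magnitude samples by their variance: a phone at rest on a table is nearly still, while one held in hand shows small tremor. The function, labels, and thresholds are hypothetical assumptions for this sketch, not the FreeMotion Library API.

```python
import statistics

def carry_context(accel_mag_window):
    """Illustrative Carry-context classifier (not the FreeMotion API).

    Uses the variance of accelerometer magnitude (in g) over a short
    window. Thresholds below are assumptions chosen for the example:
    near-zero variance -> resting on a table; slight tremor -> in hand;
    anything larger -> active motion such as walking.
    """
    var = statistics.pvariance(accel_mag_window)
    if var < 0.0001:
        return "on_table"
    elif var < 0.05:
        return "in_hand"
    return "in_motion"

print(carry_context([1.0, 1.0, 1.0, 1.0]))        # on_table
print(carry_context([1.00, 1.02, 0.98, 1.01]))    # in_hand
```

A real context engine would fuse more features (orientation, step cadence, history) and smooth decisions over time, but the windowed-statistics idea is the same.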
Our FreeMotion Library incorporates a highly power-efficient architecture to determine the underlying context. By utilizing low-power sensors and efficient algorithms, we are enabling always-on mobile platforms that better understand the user’s intent.