Towards the Meaning of Motion

Activity recognition, activity monitoring, gestures, and context awareness are all popular terms in the sensor industry, but the connections among them are rarely discussed. On closer inspection, there is a clear distinction between four levels of sensor interpretation for a mobile device, as shown in the figure.

  • Element: single motion such as a gesture or step, which happens on the order of a second,
  • Context: pattern of motion such as walking or in-car, which happens on the order of one to ten seconds,
  • Activity: class of motion composed of multiple contexts strung together, which happens on the order of minutes,
  • Intent: meaning of motion, which can be determined based on the recognized activity plus supporting information such as user habits, time, calendar, and absolute position.
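
In code, this hierarchy might be modeled as shown below. This is purely an illustrative sketch: the enum names and timescale bounds are our own labels for the four levels above, not part of any shipping API.

    from enum import Enum

    class InterpretationLevel(Enum):
        """The four levels of sensor interpretation (illustrative names)."""
        ELEMENT = "element"    # single motion (gesture, step)
        CONTEXT = "context"    # motion pattern (walking, in-car)
        ACTIVITY = "activity"  # contexts strung together
        INTENT = "intent"      # activity plus supporting information

    # Approximate time horizon, in seconds, that each level reasons over
    # (None means open-ended). Bounds are rough, per the list above.
    TIMESCALE_S = {
        InterpretationLevel.ELEMENT: (0.1, 1),
        InterpretationLevel.CONTEXT: (1, 10),
        InterpretationLevel.ACTIVITY: (60, 600),
        InterpretationLevel.INTENT: (600, None),
    }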

For example, most activity monitoring devices operate at the element level: counting steps, like a pedometer. The next level of interpretation, context, looks at the pattern of steps and determines that the user is running. Over time, other contexts such as pausing to drink water or waiting in place could be used to recognize that the activity is actually jogging. Finally, at the highest level, a system could determine the user's intent: whether the user is jogging for exercise or is late to a meeting.
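
To make the jogging walkthrough concrete, here is a minimal sketch of how each layer could roll up the one below it. The thresholds and the simple majority rule are hypothetical; a production classifier would use far richer features than step rate alone.

    from collections import Counter

    def classify_context(step_rate_hz: float) -> str:
        """Map a short (~5 s) window of step elements to a context."""
        if step_rate_hz >= 2.5:
            return "running"
        if step_rate_hz >= 0.5:
            return "walking"
        return "stationary"

    def classify_activity(recent_contexts: list) -> str:
        """Roll minutes of contexts up into an activity.

        A stretch dominated by 'running', even with brief 'stationary'
        pauses (drinking water, waiting in place), still reads as jogging.
        """
        if not recent_contexts:
            return "idle"
        counts = Counter(recent_contexts)
        if counts["running"] / len(recent_contexts) > 0.6:
            return "jogging"
        if counts["walking"] / len(recent_contexts) > 0.6:
            return "walking"
        return "mixed"

    print(classify_context(2.8))  # -> "running"
    # Sixty 5-second context windows, about five minutes of motion:
    windows = ["running"] * 40 + ["stationary"] * 5 + ["running"] * 15
    print(classify_activity(windows))  # -> "jogging"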

A few other examples are shown in the table below.

Element          Context                Activity         Intent
Putting to ear   At-ear                 In phone call    Booking a reservation
Leg movement     Pedaling, on-bicycle   Biking           Commuting to work
Step             Running                Rushing          Late for a meeting

Understanding this distinction helps determine the appropriate partitioning of the system between embedded processing and the operating system. With our FreeMotion(TM) library, context awareness is well suited to hardware acceleration. This can provide low-power, always-on context awareness to the operating system, which in turn can determine the overarching activity and feed an inference system, such as an IFTTT service, to deduce the user's intent.
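
As a sketch of that partitioning, the top of the stack (the intent layer) might look like the following. All names here are hypothetical, and FreeMotion's actual interfaces are not shown; the point is only that intent falls out of the recognized activity combined with supporting information such as time and calendar.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class SupportingInfo:
        """OS-level signals that help turn an activity into an intent."""
        now: datetime
        next_meeting: Optional[datetime]  # from the user's calendar
        habitual_jog_hour: int            # a learned user habit

    def infer_intent(activity: str, info: SupportingInfo) -> str:
        """IFTTT-style rules mapping (activity, supporting info) to intent.

        Context detection would run always-on in hardware; this rule
        layer would live in the operating system or an online service.
        """
        if activity == "jogging":
            if (info.next_meeting is not None
                    and (info.next_meeting - info.now).total_seconds() < 900):
                return "rushing to a meeting"
            if info.now.hour == info.habitual_jog_hour:
                return "exercising (habitual run)"
        return "unknown"

    info = SupportingInfo(
        now=datetime(2014, 7, 11, 8, 55),
        next_meeting=datetime(2014, 7, 11, 9, 0),
        habitual_jog_hour=7,
    )
    print(infer_intent("jogging", info))  # -> "rushing to a meeting"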

On July 11, 2014, Audience completed its acquisition of Sensor Platforms. Audience believes the combination of its Advanced Voice and Multisensory Processing with Sensor Platforms’ technology places the combined company in a unique position to deliver compelling solutions based on the fusion of voice and motion.

A multisensory user interface with context awareness will bring fresh differentiation and end-user value to the market for smartphones, wearables, and other mobile devices. Sensor Platforms has developed key technologies and software infrastructure in this space, and the combination of our engineering teams will enable us to rapidly scale our capabilities in context-aware user interfaces.

Audience welcomes the Sensor Platforms team and thanks all of its partners for their continued support during this transition.