Activity recognition, activity monitoring, gestures, and context awareness are all popular terms in the sensor industry, but the connection between them is rarely discussed. On closer examination, there is a clear distinction between four levels of sensor interpretation for a mobile device, as shown in the figure:
- Element: single motion such as a gesture or step, which happens on the order of a second,
- Context: pattern of motion such as walking or in-car, which happens on the order of one to ten seconds,
- Activity: class of motion composed of multiple contexts strung together, which happens on the order of minutes,
- Intent: meaning of motion, which can be determined based on the recognized activity plus supporting information such as user habits, time, calendar, and absolute position.
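The four levels above can be sketched as a chain of simple classifiers, each operating on a longer timescale than the one below it. This is a minimal illustration, not a real recognizer: the function names, rules, and labels are hypothetical, and the window sizes only stand in for the second/seconds/minutes scales described above.

```python
from collections import Counter

# Hypothetical sketch of the four interpretation levels:
# element (~1 s) -> context (~1-10 s) -> activity (~minutes) -> intent.

def detect_context(elements):
    """Map a short window of elements (e.g. 'step') to a context label."""
    # Toy rule: a dense run of steps in one window looks like running.
    if elements.count("step") >= 3:
        return "running"
    return "idle"

def detect_activity(contexts):
    """Map minutes' worth of contexts to an activity via majority vote."""
    label, _ = Counter(contexts).most_common(1)[0]
    return {"running": "jogging", "idle": "resting"}.get(label, "unknown")

def infer_intent(activity, calendar_busy):
    """Combine the activity with supporting information (here, the calendar)."""
    if activity == "jogging":
        return "late for a meeting" if calendar_busy else "exercising"
    return "unknown"

# Ten context windows of step elements string together into one activity.
contexts = [detect_context(["step", "step", "step"]) for _ in range(10)]
activity = detect_activity(contexts)                   # 'jogging'
intent = infer_intent(activity, calendar_busy=False)   # 'exercising'
```

The point of the sketch is the layering: each level consumes only the labels produced by the level below, plus, at the intent level, side information such as the calendar.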
For example, most activity monitoring devices operate at the element level: counting steps like a pedometer. The next level of interpretation, the context, looks at the pattern of steps and determines that the user is running. Over time, other contexts such as pausing to drink water or waiting in place could be used to recognize that the activity is actually jogging. Finally, at the highest level, a system could determine the user's intent: whether the user is jogging for exercise or is late to a meeting.
A few other examples are shown in the table below.
|Element|Context|Activity|Intent|
|---|---|---|---|
|Putting to ear|At-ear|In phone call|Booking a reservation|
|Leg movement|Pedaling, on-bicycle|Biking|Commuting to work|
|Step|Running|Rushing|Late for a meeting|
Understanding this distinction helps determine the appropriate system partitioning between embedded hardware and the operating system. With our FreeMotion(TM) library, context awareness is well suited to hardware acceleration. This provides low-power, always-on context awareness to the operating system, which in turn can determine the overarching activity and feed an inference system, such as an IFTTT service, to deduce the user's intent.
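That partitioning can be sketched as a pipeline: a low-power layer emits context labels, and the operating system aggregates them into an activity and applies an IFTTT-style rule. This is an assumed architecture for illustration only; the event names, the "biking" example from the table, and the rule are all hypothetical, and a real sensor hub would deliver events over an interrupt or FIFO interface rather than a Python generator.

```python
# Hypothetical sketch of the embedded/OS split: an always-on context
# engine (e.g. a sensor hub) streams contexts; the OS layer strings them
# together into an activity and fires an IFTTT-style rule on the result.

def hardware_context_stream():
    """Stand-in for the hardware-accelerated, always-on context engine."""
    yield from ["on-bicycle", "on-bicycle", "at-rest", "on-bicycle"]

def os_activity_layer(events, window=4):
    """OS-side aggregation: contexts over a window name the activity."""
    buf = []
    for event in events:
        buf.append(event)
        if len(buf) == window:
            dominant = max(set(buf), key=buf.count)  # majority context
            yield {"on-bicycle": "biking"}.get(dominant, "unknown")
            buf.clear()

def ifttt_rule(activity):
    """IF the activity is biking THEN trigger a commute action."""
    return "start commute playlist" if activity == "biking" else None

actions = [ifttt_rule(a) for a in os_activity_layer(hardware_context_stream())]
```

The design point is that only compact context labels cross the hardware/OS boundary, keeping the always-on part of the system cheap while the power-hungry inference runs intermittently at the OS level.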