Blog
From Rocket Science to Mobile Handsets: The Basics of Real Implementation of Sensor Fusion
Posted on June 1, 2012
Sensor fusion has become a popular topic for designers integrating inertial sensors into smartphones and tablets. The idea of sensor fusion is to take measurements from different sensors to estimate the internal state of a system. In the case of the smartphone, we combine readings from accelerometers, gyroscopes, magnetometers, and barometers, none of which directly measures orientation or position, to compute the orientation and position of the mobile device.
Sensor fusion, however, is as old as the space program.
The most widely used approach to this estimation problem was first proposed by Rudolf E. Kalman in a paper published in 1960, the so-called Kalman filter. It is an optimal, recursive Bayesian estimator. "Optimal" means that it produces the best estimate of motion given the noise characteristics of the sensors being used. But the true power of the Kalman filter is that it provides a framework to easily model and track the non-ideal properties of a system, such as sensor biases and environmental anomalies, and remove their harmful effects. Simpler but sub-optimal approaches to estimation, such as complementary filters, can't easily track these things.
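To make the predict/update cycle concrete, here is a minimal one-dimensional sketch (with illustrative, made-up values), estimating a constant angle from noisy readings. Q and R are the process- and measurement-noise variances, two of the "knobs" a real design must tune against sensor data; a production filter tracks a vector state with matrices in place of these scalars.

```python
def kalman_step(x, P, z, Q, R):
    """One predict/update cycle for a scalar state x with variance P."""
    # Predict: the model here is "angle stays the same", so only the
    # uncertainty grows, by the process-noise variance Q.
    P = P + Q
    # Update: blend the prediction and the measurement z using the
    # Kalman gain K, which is optimal given the two noise variances.
    K = P / (P + R)
    x = x + K * (z - x)   # corrected estimate
    P = (1 - K) * P       # reduced uncertainty
    return x, P

# Noisy readings of a true angle of 10.0 degrees (synthetic data).
x, P = 0.0, 1.0           # initial guess and its variance
for z in [10.3, 9.8, 10.1, 9.9, 10.2]:
    x, P = kalman_step(x, P, z, Q=1e-4, R=0.09)
```

After a few steps the estimate converges near 10.0 while P, the filter's own confidence bound, shrinks; this self-reported uncertainty is what simpler complementary filters lack.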
Kalman filters saw immediate application in navigational guidance systems for the Apollo program in the late 1960s. Notably, an extended Kalman filter (EKF) made it possible for Earth-based Doppler radars to track the lunar module of Apollo 11 so it could land on the moon. Since then, more complicated and computationally intensive kinds of optimal estimators have been developed, but Kalman filters remain the most popular way to track motion in navigation applications because of their relatively low CPU overhead.
Today, Kalman filters are a normal part of aerospace and electrical engineering curricula. While the method for designing a Kalman filter is no longer novel, useful implementations remain surprisingly difficult: Kalman filters have many knobs to turn, and designers have many candidate models for approximating a complicated real-world system. The challenge, then, becomes finding the right set of knobs and the right model.
Fundamentally, Kalman filters must be created by iterating on a design with a validated test process. Since a good test process requires real-world data from a wide variety of locations and situations, developing this test process and its test database makes up the bulk of the design effort.
Another difficulty lies in creating an efficient implementation. This is in part due to the lack of a floating point unit on most embedded platforms. To be efficient, a Kalman filter must be implemented using fixed point arithmetic, just like the Apollo Kalman filters! Even using fixed point math, efficiency still means choosing the simplest model possible. The computational complexity of a Kalman filter grows proportionally with n³, where n is the number of model parameters. So even a reduction from 10 to 9 parameters cuts CPU usage by nearly 30%, since 9³/10³ ≈ 0.73.
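As a rough sketch of what fixed point math looks like, here is the Q16.16 format (16 integer bits, 16 fractional bits) worked out in Python; the format choice is illustrative, since a real embedded filter picks the precision of each variable individually. The last line checks the cubic cost scaling claimed above.

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS        # the value 1.0 in Q16.16

def to_fixed(x):
    """Convert a float to its Q16.16 integer representation."""
    return int(round(x * ONE))

def fixed_mul(a, b):
    """Multiply two Q16.16 numbers.

    The raw integer product carries 32 fractional bits, so it must be
    shifted back down by 16 to restore the radix point.
    """
    return (a * b) >> FRAC_BITS

def to_float(a):
    """Convert a Q16.16 integer back to a float for inspection."""
    return a / ONE

product = to_float(fixed_mul(to_fixed(1.5), to_fixed(2.0)))  # 3.0

# Cubic cost scaling: dropping a model from 10 to 9 parameters
# removes roughly 27% of the filter's arithmetic.
savings = 1 - 9**3 / 10**3
```

On a 32-bit microcontroller the product of two Q16.16 values needs a 64-bit intermediate before the shift, which is exactly the kind of detail that makes efficient fixed point implementations fiddly in practice.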
In the tradition of the best technology that disappears into the background, the innovation of Rudolf Kalman is embedded in everyday consumer devices. The technology that enabled the first lunar landing is now part of a checklist in every smartphone. But to create a Kalman filter that works well, the devil is in the details.