What is SLAM?
How does it work, and what does it mean for mobile 3D mapping?
Simultaneous localization and mapping (SLAM) is not a specific software application, or even a single algorithm. SLAM is a broad term for a technological process, developed in the 1980s, that enables robots to navigate autonomously through new environments without a map.
Autonomous navigation requires locating the machine in the environment while simultaneously generating a map of that environment. This is very difficult to accomplish, because the machine needs a map of the environment to estimate its own location, but to generate that map, it needs to know its own location.
Because of this circular dependency, SLAM has sometimes been called a “chicken-and-egg” problem.
How does SLAM work?
There are many approaches to SLAM. Luckily, we can still make some generalizations to demonstrate the basic idea.
Here’s a very simplified explanation: When the robot starts up, the SLAM system fuses data from the robot’s onboard sensors, then processes it using computer vision algorithms to “recognize” features in the surrounding environment. This enables the SLAM system to build a rough map and make an initial estimate of the robot’s position.
When the robot moves, the SLAM system takes that initial position estimate, collects new data from its onboard sensors, and makes a new (and improved) position estimate. Once that new position estimate is known, the map is updated in turn, which completes the cycle.
By repeating these steps continuously, the SLAM system tracks the robot’s path as it moves through the environment. At the same time, it builds a detailed map.
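The cycle described above can be sketched in code. This is a deliberately toy, one-dimensional illustration of the predict–sense–update loop, not a real SLAM implementation (production systems use probabilistic filters such as EKF-SLAM or graph optimization); all names, numbers, and the scenario are invented for this sketch.

```python
# Toy 1D SLAM loop: the robot moves along a corridor with biased odometry,
# senses nearby landmarks, and simultaneously builds a landmark map while
# using re-observed landmarks to correct its drifting position estimate.

LANDMARKS = {"A": 2.0, "B": 5.0, "C": 8.0}  # true positions (unknown to robot)
SENSOR_RANGE = 1.5                          # landmarks visible within this distance


def sense(true_pose):
    """Return {landmark_id: measured signed distance} for nearby landmarks."""
    return {lid: pos - true_pose for lid, pos in LANDMARKS.items()
            if abs(pos - true_pose) <= SENSOR_RANGE}


def run_slam(steps, true_step=1.1, odom_step=1.0):
    """Run the predict-sense-update cycle; odometry under-reports motion."""
    true_pose, est_pose = 0.0, 0.0
    landmark_map = {}                       # the map being built
    for _ in range(steps):
        # 1. Predict: move, then update the pose estimate from (biased) odometry.
        true_pose += true_step
        est_pose += odom_step
        # 2. Sense: observe nearby landmarks.
        for lid, rng in sense(true_pose).items():
            if lid in landmark_map:
                # 3a. Re-observation: pull the pose estimate toward the
                #     position implied by the mapped landmark.
                est_pose = 0.5 * est_pose + 0.5 * (landmark_map[lid] - rng)
            else:
                # 3b. New landmark: add it to the map at the estimated position.
                landmark_map[lid] = est_pose + rng
    return est_pose, landmark_map, true_pose
```

Running the loop shows the key point of the cycle: raw odometry drifts without bound, while the corrected estimate stays closer to the true pose, and the map and trajectory improve each other as the loop repeats.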