Simultaneous Localization and Mapping (SLAM) has been around for quite a while, but it has gained much popularity with the recent advent of autonomous navigation and self-driving cars. SLAM is a perception technique that helps a robot or device estimate its position relative to an unknown environment while simultaneously building a map of it. Its applications extend from augmented reality and virtual reality to indoor navigation and autonomous vehicles.
To implement SLAM with a perceptive device, the algorithm assumes we can track the camera's motion: forward/backward, up/down, and left/right translation, plus rotation, for a full six degrees of freedom. If the camera is mounted on a vehicle that navigates on land, the up and down movements are usually minimal and can be neglected on flat roads. Visual SLAM relies on point descriptors for both map generation and localization. These descriptors can be built from a camera video stream, IR readings, LiDAR depth estimates, etc., and from these values the algorithm generates a map of the environment. The algorithm then applies a Kalman filter, taking observations of known values over time to estimate the unknown variables that depend on them.

In our case, we have a monocular camera as a sensor that streams the video feed to a server, where the computation takes place. Based on the video feed the server receives, various feature points are detected and a descriptor dictionary/JSON is created that stores information about each one. It looks something like this:
Key_point = {(x, y): {'orientation': value, 'relative_distance': value, 'color': value, ...}}
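As a concrete illustration, here is a minimal sketch of how such a dictionary could be populated from OpenCV's ORB keypoints. The field names and the 'frame.png' path are placeholders; attributes like relative distance would come from later triangulation rather than from the detector itself:

```python
import cv2

# minimal sketch: build a descriptor dictionary from ORB key-points
# ('frame.png' is a placeholder for one frame of the video feed)
frame = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
keypoints, descriptors = orb.detectAndCompute(frame, None)

key_point = {
    kp.pt: {
        'orientation': kp.angle,   # key-point orientation in degrees
        'response': kp.response,   # detector response strength
        'size': kp.size,           # diameter of the key-point neighborhood
    }
    for kp in keypoints
}
```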
Once this is created, we have the known variables needed to process the unknown variable (the reconstruction of the map). Due to hardware limitations (I ran this on a Surface Pro 6), the project uses a sparse ORB-SLAM algorithm instead of a dense one. For our test case, this doesn't compromise the quality of the results by much. Let's take a quick look at what ORB feature mapping is...
ORB Feature Mapping:
Oriented FAST and Rotated BRIEF (ORB) is a scale-invariant, rotation-invariant, one-shot feature detection algorithm. As its name suggests, it is relatively fast and doesn't require a GPU for computation. The algorithm computes the key-points of a given train image and matches them against a test image. These key-points are distinctive regions in an image, such as corners and edges detected from pixel-intensity changes. Candidate key-points are screened with FAST's high-speed test, which compares just 4 pixels of the surrounding 16-pixel circle before running the full segment test, and the resulting binary BRIEF descriptors are matched by Hamming distance, which is far cheaper than the floating-point descriptor matching used in SURF. ORB also up-scales and down-scales the training image into an image pyramid to make the feature detection scale-invariant. Being computationally inexpensive, the algorithm can run on a CPU and even on mobile devices. When running the algorithm on our dashcam video, we get something like this:
*(Image: ORB key-points detected on the dashcam video)*
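Here is a minimal sketch of producing such a frame with OpenCV's ORB detector; the 'dashcam.mp4' path and the parameter values are illustrative, not the project's exact settings:

```python
import cv2

# minimal sketch: detect and draw ORB key-points on one dashcam frame
cap = cv2.VideoCapture('dashcam.mp4')   # placeholder path
ok, frame = cap.read()

orb = cv2.ORB_create(nfeatures=1500)    # caps the number of key-points
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
keypoints = orb.detect(gray, None)

# draw circles whose size and angle reflect each key-point's scale and orientation
vis = cv2.drawKeypoints(frame, keypoints, None, color=(0, 255, 0),
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('orb_keypoints.png', vis)
```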
Methodology:
The application begins by calibrating the camera and setting the camera intrinsics for optimization. It uses OpenCV's ORB feature-mapping functions for key-point extraction and Lowe's ratio test for matching the key-points. Each key-point detected in the image at time 't-1' is matched against a number of key-points from the image at time 't', and for each one the two nearest candidates by descriptor distance are kept. Lowe's test then checks that these two distances are sufficiently different; if they are not, the match is ambiguous, so the key-point is eliminated and not used for further calculations.
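A minimal sketch of that matching step, assuming a brute-force Hamming matcher and the commonly used 0.75 ratio threshold (the frame paths are placeholders):

```python
import cv2

# minimal sketch: Lowe's ratio test between frames t-1 and t
orb = cv2.ORB_create(nfeatures=1500)
frame_prev = cv2.imread('frame_prev.png', cv2.IMREAD_GRAYSCALE)  # placeholder paths
frame_curr = cv2.imread('frame_curr.png', cv2.IMREAD_GRAYSCALE)

kp1, des1 = orb.detectAndCompute(frame_prev, None)
kp2, des2 = orb.detectAndCompute(frame_curr, None)

# Hamming distance suits ORB's binary descriptors; k=2 keeps the two nearest candidates
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(des1, des2, k=2)

# keep a match only when the best candidate is clearly better than the runner-up
good = []
for pair in pairs:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
```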
For 2D video visualization, I had a couple of choices: OpenCV, SDL2, PyGame, Kivy, Matplotlib, etc. It turns out OpenCV's imshow function might not be the best one: it took ages for OpenCV to imshow a 720p video with all our computations running. The application tried SDL2, Matplotlib, and Kivy's video-playing libraries, but PyGame outperformed them all. Thus, I used PyGame for visualizing the detected key-points and various other information such as orientation, direction, and speed.
*(Image: PyGame visualization of detected key-points with orientation, direction, and speed)*
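For reference, a minimal sketch of pushing OpenCV frames into a PyGame window; the 'dashcam.mp4' path and the bare-bones event loop are placeholders, not the project's actual render loop:

```python
import cv2
import pygame

# minimal sketch: display OpenCV video frames through PyGame
cap = cv2.VideoCapture('dashcam.mp4')   # placeholder path
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

pygame.init()
screen = pygame.display.set_mode((w, h))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok or any(e.type == pygame.QUIT for e in pygame.event.get()):
        break
    # OpenCV gives BGR in (height, width) order; PyGame wants RGB in (width, height)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    surface = pygame.surfarray.make_surface(rgb.swapaxes(0, 1))
    screen.blit(surface, (0, 0))
    pygame.display.flip()

pygame.quit()
```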
For visualizing the reconstructed map in 3D, I wanted a library that:
1. Supports Python, and it's open-source!
2. Uses simple OpenGL at its core
3. Provides modularized 3D visualization
For implementing a graph-based nonlinear error function, the project leverages the Python wrapper of the G2O library. G2O is an open-source framework for optimizing graph-based nonlinear least-squares problems such as SLAM, where the measurements are assumed to be corrupted by Gaussian noise.
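As an illustration, here is a minimal pose-graph sketch in the style of the g2opy bindings. The vertex/edge classes and solver setup follow that wrapper's examples, and the two poses and the relative-motion measurement are made-up values, not the project's actual graph:

```python
import numpy as np
import g2o

# minimal pose-graph sketch: two camera poses, one relative-motion constraint
optimizer = g2o.SparseOptimizer()
solver = g2o.BlockSolverSE3(g2o.LinearSolverEigenSE3())
optimizer.set_algorithm(g2o.OptimizationAlgorithmLevenberg(solver))

# add two SE(3) pose vertices; fix the first one to anchor the map
for i in range(2):
    pose = np.identity(4)
    pose[0, 3] = float(i)             # made-up initial guess: poses 1 m apart along x
    v = g2o.VertexSE3()
    v.set_id(i)
    v.set_estimate(g2o.Isometry3d(pose))
    v.set_fixed(i == 0)
    optimizer.add_vertex(v)

# one edge encoding the measured relative transform between the two poses
measurement = np.identity(4)
measurement[0, 3] = 1.0               # made-up odometry: 1 m forward
edge = g2o.EdgeSE3()
edge.set_vertex(0, optimizer.vertex(0))
edge.set_vertex(1, optimizer.vertex(1))
edge.set_measurement(g2o.Isometry3d(measurement))
edge.set_information(np.identity(6))  # confidence in the measurement
optimizer.add_edge(edge)

optimizer.initialize_optimization()
optimizer.optimize(20)                # run 20 Levenberg-Marquardt iterations
```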
Results:
The implemented algorithm provides a good framework for testing and visualizing a 3D reconstruction of the environment based on monocular, ORB-driven SLAM. Being written in Python, this implementation is not suitable for real-time visualization; frameworks such as ORB-SLAM2 and OpenVSLAM already provide C++ implementations of the algorithm for that. That said, here are some demos of the algorithm:
You can find the full code here on my GitHub.