VSLAM Playground
This is a project where I experiment with modern deep learning-based components to build a full
Visual SLAM pipeline. The project initially uses the KITTI dataset, but the pipeline is meant to run
in real time on autonomous platforms.
The project compares recent learned feature detectors/descriptors such as SuperPoint and DISK,
in combination with feature matchers such as SuperGlue and LightGlue.
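As a rough illustration, the detection-and-matching step for two consecutive frames could look like the sketch below. It assumes the reference LightGlue package (github.com/cvg/LightGlue); the file paths and parameters are placeholders, not the project's actual code.

```python
import torch
from lightglue import LightGlue, SuperPoint      # from github.com/cvg/LightGlue (API assumed from its README)
from lightglue.utils import load_image, rbd

device = "cuda" if torch.cuda.is_available() else "cpu"

# SuperPoint keypoints/descriptors matched with LightGlue; DISK can be
# swapped in as the extractor in the same way.
extractor = SuperPoint(max_num_keypoints=2048).eval().to(device)
matcher = LightGlue(features="superpoint").eval().to(device)

# Two consecutive KITTI frames (paths are illustrative placeholders)
image0 = load_image("kitti/sequences/00/image_0/000000.png").to(device)
image1 = load_image("kitti/sequences/00/image_0/000001.png").to(device)

feats0 = extractor.extract(image0)
feats1 = extractor.extract(image1)
matches01 = matcher({"image0": feats0, "image1": feats1})

# Remove the batch dimension and gather matched keypoint coordinates
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]
matches = matches01["matches"]                  # (M, 2) indices into each keypoint set
pts0 = feats0["keypoints"][matches[..., 0]]     # (M, 2) pixel coordinates in image0
pts1 = feats1["keypoints"][matches[..., 1]]     # (M, 2) pixel coordinates in image1
```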
It also compares motion estimation techniques for monocular and stereo visual odometry, such as
essential matrix estimation and the PnP (Perspective-n-Point) algorithm.
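The sketch below outlines both motion estimation approaches with OpenCV: 2D-2D pose recovery from the essential matrix for the monocular case, and PnP with RANSAC from 3D-2D correspondences for the stereo/depth case. The helper names and thresholds are illustrative assumptions, not the project's actual implementation.

```python
import cv2
import numpy as np

def pose_from_essential(pts0, pts1, K):
    """Monocular case: relative pose from 2D-2D matches (translation only up to scale)."""
    pts0 = np.ascontiguousarray(pts0, dtype=np.float64)
    pts1 = np.ascontiguousarray(pts1, dtype=np.float64)
    E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)
    return R, t

def pose_from_pnp(pts3d, pts2d, K):
    """Stereo/depth case: absolute pose from 3D-2D correspondences via PnP + RANSAC."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.ascontiguousarray(pts3d, dtype=np.float64),
        np.ascontiguousarray(pts2d, dtype=np.float64),
        K, None, reprojectionError=3.0)
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    return R, tvec
```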
Depth is obtained in two ways: by projecting LiDAR scan points onto the image plane, and from deep stereo depth estimation networks such as HITNet.
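A minimal sketch of the LiDAR-to-image projection is shown below, assuming KITTI-style calibration (a Velodyne-to-camera transform and a rectified projection matrix such as P2); the function and matrix names are hypothetical.

```python
import numpy as np

def project_lidar_to_image(points, T_cam_velo, P_rect, image_shape):
    """Project LiDAR points into the image plane of a rectified camera.

    points      -- (N, 3) LiDAR points in the Velodyne frame
    T_cam_velo  -- (4, 4) rigid transform from the Velodyne frame to the (rectified) camera frame
    P_rect      -- (3, 4) rectified camera projection matrix (e.g. KITTI P2)
    image_shape -- (height, width) of the target image
    Returns pixel coordinates (M, 2) and the corresponding depths (M,).
    """
    # Homogeneous LiDAR points -> camera frame
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])    # (N, 4)
    pts_cam = (T_cam_velo @ pts_h.T).T                            # (N, 4)

    # Keep only points in front of the camera
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Project with the rectified projection matrix and normalise
    proj = (P_rect @ pts_cam.T).T                                 # (M, 3)
    uv = proj[:, :2] / proj[:, 2:3]
    depth = proj[:, 2]

    # Discard projections that fall outside the image
    h, w = image_shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[valid], depth[valid]
```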
The code is open source and available on GitHub. I am also writing blog posts about Visual Odometry and SLAM as I work through this project.